Hybrid ARIMA-LSTM Code

The hybrid ARIMA-LSTM model is open to a variety of experimentation. For ideal performance, a balance must be struck between the levels of volatility best suited to each component: ARIMA models the smooth moving-average series, while the LSTM models the noisier residual. Using shorter MA periods, which tend to produce a non-mesokurtic distribution, may achieve a better volatility balance between the two models.
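The kurtosis criterion used later in this notebook works on the Pearson (`fisher=False`) convention, under which a mesokurtic (normal-like) distribution scores K = 3. A minimal sketch with invented data, just to illustrate the scale:

```python
import numpy as np
from scipy.stats import kurtosis

# Pearson (population) kurtosis: fisher=False means a normal
# distribution scores K = 3 (mesokurtic).
toy = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # hypothetical series
print(kurtosis(toy, fisher=False))  # 1.7 -- flatter than normal (platykurtic)

# A heavy-tailed sample scores above 3 (leptokurtic)
rng = np.random.default_rng(0)
heavy = rng.standard_t(df=3, size=100_000)
print(kurtosis(heavy, fisher=False) > 3)  # True
```

Moving averages with K near 3 are the ones treated below as "normal enough" for the ARIMA side of the hybrid.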

Import Libraries

In [ ]:
import pandas as pd
pd.set_option('display.max_rows', 500)
import timeit
In [ ]:
!pip install -q -U keras-tuner
In [ ]:
import keras_tuner as kt
In [ ]:
!pip install pmdarima
Collecting pmdarima
  Downloading pmdarima-1.8.4-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl (1.4 MB)
Requirement already satisfied: scipy>=1.3.2 in /usr/local/lib/python3.7/dist-packages (from pmdarima) (1.4.1)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from pmdarima) (1.1.0)
Requirement already satisfied: Cython!=0.29.18,>=0.29 in /usr/local/lib/python3.7/dist-packages (from pmdarima) (0.29.24)
Requirement already satisfied: urllib3 in /usr/local/lib/python3.7/dist-packages (from pmdarima) (1.24.3)
Requirement already satisfied: scikit-learn>=0.22 in /usr/local/lib/python3.7/dist-packages (from pmdarima) (1.0.1)
Collecting statsmodels!=0.12.0,>=0.11
  Downloading statsmodels-0.13.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (9.8 MB)
Requirement already satisfied: pandas>=0.19 in /usr/local/lib/python3.7/dist-packages (from pmdarima) (1.1.5)
Requirement already satisfied: setuptools!=50.0.0,>=38.6.0 in /usr/local/lib/python3.7/dist-packages (from pmdarima) (57.4.0)
Requirement already satisfied: numpy>=1.19.3 in /usr/local/lib/python3.7/dist-packages (from pmdarima) (1.19.5)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas>=0.19->pmdarima) (2018.9)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas>=0.19->pmdarima) (2.8.2)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.7.3->pandas>=0.19->pmdarima) (1.15.0)
Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from scikit-learn>=0.22->pmdarima) (3.0.0)
Requirement already satisfied: patsy>=0.5.2 in /usr/local/lib/python3.7/dist-packages (from statsmodels!=0.12.0,>=0.11->pmdarima) (0.5.2)
Installing collected packages: statsmodels, pmdarima
  Attempting uninstall: statsmodels
    Found existing installation: statsmodels 0.10.2
    Uninstalling statsmodels-0.10.2:
      Successfully uninstalled statsmodels-0.10.2
Successfully installed pmdarima-1.8.4 statsmodels-0.13.1
In [ ]:
import pmdarima
In [ ]:
url = 'https://launchpad.net/~mario-mariomedina/+archive/ubuntu/talib/+files'
!wget $url/libta-lib0_0.4.0-oneiric1_amd64.deb -qO libta.deb
!wget $url/ta-lib0-dev_0.4.0-oneiric1_amd64.deb -qO ta.deb
!dpkg -i libta.deb ta.deb
!pip install ta-lib
import talib
Selecting previously unselected package libta-lib0.
(Reading database ... 155222 files and directories currently installed.)
Preparing to unpack libta.deb ...
Unpacking libta-lib0 (0.4.0-oneiric1) ...
Selecting previously unselected package ta-lib0-dev.
Preparing to unpack ta.deb ...
Unpacking ta-lib0-dev (0.4.0-oneiric1) ...
Setting up libta-lib0 (0.4.0-oneiric1) ...
Setting up ta-lib0-dev (0.4.0-oneiric1) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Processing triggers for libc-bin (2.27-3ubuntu1.3) ...
/sbin/ldconfig.real: /usr/local/lib/python3.7/dist-packages/ideep4py/lib/libmkldnn.so.0 is not a symbolic link

Collecting ta-lib
  Downloading TA-Lib-0.4.22.tar.gz (268 kB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
    Preparing wheel metadata ... done
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from ta-lib) (1.19.5)
Building wheels for collected packages: ta-lib
  Building wheel for ta-lib (PEP 517) ... done
  Created wheel for ta-lib: filename=TA_Lib-0.4.22-cp37-cp37m-linux_x86_64.whl size=1465696 sha256=fd07dcc6bd81b6649d73bbe63bbc5993a021f7026d99ecf058e277f8a8f80365
  Stored in directory: /root/.cache/pip/wheels/7b/63/a9/144081748d9c4f0a09b4670c7b3c414bcb33ff97f0724c753a
Successfully built ta-lib
Installing collected packages: ta-lib
Successfully installed ta-lib-0.4.22
In [ ]:
import tensorflow
import statsmodels.tsa.api
import keras
import sklearn
In [ ]:
from tensorflow.keras.optimizers import Adam
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, LSTM, Dropout, Bidirectional, BatchNormalization, Embedding, TimeDistributed, LeakyReLU, GRU
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau
In [ ]:
from keras.models import Sequential, load_model
from keras.layers import Dense, LSTM, Activation, Dropout
from keras import backend as K
from keras.utils.generic_utils import get_custom_objects
from keras.callbacks import ModelCheckpoint,EarlyStopping
from keras.regularizers import l1_l2
In [ ]:
import math
In [ ]:
from statsmodels.tsa.api import VAR
from statsmodels.tsa.statespace.varmax import VARMAX,VARMAXResults
In [ ]:
from sklearn.metrics import mean_squared_error, mean_absolute_percentage_error, mean_absolute_error
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
In [ ]:
from matplotlib import pyplot
In [ ]:
import json
import datetime
import pandas as pd
import numpy as np
import os
from scipy.stats import kurtosis
import pmdarima as pm
from pmdarima import auto_arima
from talib import abstract
import matplotlib.pyplot as plt
# plt.rcParams.update({'font.size': 16})
from matplotlib.pyplot import figure
from numpy import array
from numpy import hstack
from keras.models import Sequential
from keras.layers import LSTM
from keras.layers import Dense
from keras.layers import RepeatVector
from keras.layers import TimeDistributed
In [ ]:
from keras.utils.generic_utils import get_custom_objects
from tensorflow.keras.utils import plot_model
In [ ]:
import warnings
from statsmodels.tools.sm_exceptions import ConvergenceWarning
warnings.simplefilter('ignore', ConvergenceWarning)

Load Data

In [ ]:
from google.colab import drive
drive.mount('/content/drive')
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
In [ ]:
cd drive/MyDrive/Stock price prediction/Generated datasets
/content/drive/.shortcut-targets-by-id/1IaGjVBlTspxI2CHSrxfYnaiYvsaG0pHs/Stock price prediction/Generated datasets
In [ ]:
df = pd.read_csv("FULL_Data_google_COVID_bull_bear.csv",parse_dates=[0])
df.tail(10)
Out[ ]:
Unnamed: 0 Unnamed: 0.1 Unnamed: 0.1.1 Unnamed: 0.1.1.1 Open High Low Close Adj Close Volume MA7 MA21 MACD 20SD upper_band lower_band EMA logmomentum absolute of 3 comp angle of 3 comp absolute of 6 comp angle of 6 comp absolute of 9 comp angle of 9 comp Date search COVID positiveIncrease COVID deathIncrease bull score bear score fourier bull 10 fourier bull 30 fourier bear 10 fourier bear 30
1592 1592 1781 1781 1781 150.199997 151.429993 150.059998 150.809998 150.809998 56787900.0 150.565717 148.423811 -1.137777 2.817933 154.059677 142.787944 150.767809 5.009368 93.428749 -0.061228 100.779503 -0.039111 103.599003 -0.022436 2021-11-09 19 112313 1258 0.119141 0.111328 NaN NaN NaN NaN
1593 1593 1782 1782 1782 150.020004 150.130005 147.850006 147.919998 147.919998 65187100.0 150.417145 148.729049 -1.236913 2.144358 153.017766 144.440332 148.869268 4.989888 92.922909 -0.061683 99.694365 -0.039762 101.872301 -0.022657 2021-11-10 19 80301 1470 0.154297 0.109375 NaN NaN NaN NaN
1594 1594 1783 1783 1783 148.960007 149.429993 147.679993 147.869995 147.869995 41000000.0 150.110001 149.060477 -1.165047 1.767475 152.595428 145.525526 148.203086 4.989548 92.416471 -0.062129 98.604584 -0.040391 100.137594 -0.022839 2021-11-11 19 94975 1662 0.102845 0.126915 NaN NaN NaN NaN
1595 1595 1784 1784 1784 148.429993 150.399994 147.479996 149.990005 149.990005 63632600.0 149.895715 149.357144 -0.869308 1.420732 152.198608 146.515681 149.394365 5.003879 91.909483 -0.062566 97.510555 -0.040998 98.396260 -0.022980 2021-11-12 19 55499 797 0.157277 0.080595 NaN NaN NaN NaN
1596 1596 1785 1785 1785 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 2021-11-13 19 146529 2505 0.139459 0.083243 NaN NaN NaN NaN
1597 1597 1786 1786 1786 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 2021-11-14 19 40964 479 0.151261 0.100840 NaN NaN NaN NaN
1598 1598 1787 1787 1787 150.369995 151.880005 149.429993 150.000000 150.000000 59222800.0 149.758571 149.602859 -0.907641 1.229694 152.062246 147.143471 149.798122 5.003946 91.401994 -0.062993 96.412672 -0.041581 96.649685 -0.023077 2021-11-15 22 30290 148 0.136737 0.109389 NaN NaN NaN NaN
1599 1599 1788 1788 1788 149.940002 151.490005 149.339996 151.000000 151.000000 59256200.0 149.718571 149.814763 -0.791320 1.236243 152.287250 147.342277 150.599374 5.010635 90.894052 -0.063410 95.311334 -0.042140 94.899260 -0.023130 2021-11-16 22 138962 1294 0.135531 0.115385 NaN NaN NaN NaN
1600 1600 1789 1789 1789 151.000000 155.000000 150.990005 153.490005 153.490005 88807000.0 150.154286 150.040002 -0.657719 1.467121 152.974245 147.105759 152.526461 5.027099 90.385704 -0.063817 94.206941 -0.042673 93.146378 -0.023135 2021-11-17 22 87626 1290 0.100870 0.126957 NaN NaN NaN NaN
1601 1601 1790 1790 1790 153.710007 158.669998 153.050003 157.869995 157.869995 137659100.0 151.162857 150.450002 -0.609656 2.267825 154.985653 145.914351 156.088817 5.055417 89.877000 -0.064214 93.099895 -0.043179 91.392433 -0.023090 2021-11-18 22 111404 1637 0.145098 0.121569 NaN NaN NaN NaN
In [ ]:
cd ..
In [ ]:
cd Archana - LSTM Hybrid/Outputs/Baseline
/content/drive/.shortcut-targets-by-id/1IaGjVBlTspxI2CHSrxfYnaiYvsaG0pHs/Stock price prediction/Archana - LSTM Hybrid/Outputs/Baseline
In [ ]:
pd.to_datetime(df[np.isnan(df.Close)==True]['Date']).dt.day_name().head(5)
Out[ ]:
0    Saturday
1      Sunday
3     Tuesday
7    Saturday
8      Sunday
Name: Date, dtype: object
In [ ]:
len(pd.to_datetime(df[np.isnan(df.Close)==True]['Date']).dt.day_name())
Out[ ]:
497
In [ ]:
len(df)
Out[ ]:
1602
In [ ]:
len(df) - len(pd.to_datetime(df[np.isnan(df.Close)==True]['Date']).dt.day_name())
Out[ ]:
1105
In [ ]:
df.dropna(inplace=True)
len(df)
Out[ ]:
1080
In [ ]:
pd.to_datetime(df[np.isnan(df.Close)==True]['Date']).dt.day_name()
Out[ ]:
Series([], Name: Date, dtype: object)
In [ ]:
df.head(5)
Out[ ]:
Unnamed: 0 Unnamed: 0.1 Unnamed: 0.1.1 Unnamed: 0.1.1.1 Open High Low Close Adj Close Volume MA7 MA21 MACD 20SD upper_band lower_band EMA logmomentum absolute of 3 comp angle of 3 comp absolute of 6 comp angle of 6 comp absolute of 9 comp angle of 9 comp Date search COVID positiveIncrease COVID deathIncrease bull score bear score fourier bull 10 fourier bull 30 fourier bear 10 fourier bear 30
2 2 191 191 191 36.220001 36.325001 35.775002 35.875000 34.054882 57111200.0 36.173571 36.751904 0.303356 0.960520 38.672945 34.830864 35.924548 3.551770 38.458011 0.046984 29.704545 0.102857 43.304973 -0.053955 2017-07-03 15 0 0 0.666667 0.000000 0.142778 0.146810 0.100537 0.099251
4 4 193 193 193 35.922501 36.197498 35.680000 36.022499 34.194897 86278400.0 36.095357 36.634762 0.328795 0.852735 38.340231 34.929292 35.989849 3.555991 38.240991 0.049445 29.954520 0.099254 43.438321 -0.053936 2017-07-05 15 0 0 0.400000 0.000000 0.144487 0.145833 0.100630 0.096361
5 5 194 194 194 35.755001 35.875000 35.602501 35.682499 33.872143 96515200.0 35.984999 36.495238 0.346702 0.677629 37.850495 35.139980 35.784949 3.546235 38.027974 0.051918 30.209839 0.095602 43.557403 -0.053820 2017-07-06 15 0 0 0.142857 0.142857 0.145346 0.145164 0.100672 0.094761
6 6 195 195 195 35.724998 36.187500 35.724998 36.044998 34.216255 76806800.0 36.001071 36.362023 0.387422 0.387634 37.137291 35.586756 35.958315 3.556633 37.818962 0.054401 30.470232 0.091907 43.662260 -0.053608 2017-07-07 15 0 0 0.333333 0.000000 0.146208 0.144377 0.100711 0.093072
9 9 198 198 198 36.027500 36.487499 35.842499 36.264999 34.425095 84362400.0 35.973571 36.243809 0.388315 0.308042 36.859893 35.627725 36.162771 3.562891 37.613953 0.056893 30.735430 0.088177 43.752965 -0.053302 2017-07-10 14 0 0 0.000000 0.000000 0.148802 0.141354 0.100808 0.087587
In [ ]:
stock_col = list(df.columns)
stock_col = stock_col[4:len(stock_col)]
In [ ]:
dataset_final = df[stock_col]
In [ ]:
dataset_final.head(5)
Out[ ]:
Open High Low Close Adj Close Volume MA7 MA21 MACD 20SD upper_band lower_band EMA logmomentum absolute of 3 comp angle of 3 comp absolute of 6 comp angle of 6 comp absolute of 9 comp angle of 9 comp Date search COVID positiveIncrease COVID deathIncrease bull score bear score fourier bull 10 fourier bull 30 fourier bear 10 fourier bear 30
2 36.220001 36.325001 35.775002 35.875000 34.054882 57111200.0 36.173571 36.751904 0.303356 0.960520 38.672945 34.830864 35.924548 3.551770 38.458011 0.046984 29.704545 0.102857 43.304973 -0.053955 2017-07-03 15 0 0 0.666667 0.000000 0.142778 0.146810 0.100537 0.099251
4 35.922501 36.197498 35.680000 36.022499 34.194897 86278400.0 36.095357 36.634762 0.328795 0.852735 38.340231 34.929292 35.989849 3.555991 38.240991 0.049445 29.954520 0.099254 43.438321 -0.053936 2017-07-05 15 0 0 0.400000 0.000000 0.144487 0.145833 0.100630 0.096361
5 35.755001 35.875000 35.602501 35.682499 33.872143 96515200.0 35.984999 36.495238 0.346702 0.677629 37.850495 35.139980 35.784949 3.546235 38.027974 0.051918 30.209839 0.095602 43.557403 -0.053820 2017-07-06 15 0 0 0.142857 0.142857 0.145346 0.145164 0.100672 0.094761
6 35.724998 36.187500 35.724998 36.044998 34.216255 76806800.0 36.001071 36.362023 0.387422 0.387634 37.137291 35.586756 35.958315 3.556633 37.818962 0.054401 30.470232 0.091907 43.662260 -0.053608 2017-07-07 15 0 0 0.333333 0.000000 0.146208 0.144377 0.100711 0.093072
9 36.027500 36.487499 35.842499 36.264999 34.425095 84362400.0 35.973571 36.243809 0.388315 0.308042 36.859893 35.627725 36.162771 3.562891 37.613953 0.056893 30.735430 0.088177 43.752965 -0.053302 2017-07-10 14 0 0 0.000000 0.000000 0.148802 0.141354 0.100808 0.087587

Data Load for Experiment set 1 with Technical Indicators

In [ ]:
stock_col = list(df.columns)
stock_col = stock_col[4:len(stock_col)-9]
dataset_final = df[stock_col]
dataset_final.head(5)
Out[ ]:
Open High Low Close Adj Close Volume MA7 MA21 MACD 20SD upper_band lower_band EMA logmomentum absolute of 3 comp angle of 3 comp absolute of 6 comp angle of 6 comp absolute of 9 comp angle of 9 comp Date
2 36.220001 36.325001 35.775002 35.875000 34.054882 57111200.0 36.173571 36.751904 0.303356 0.960520 38.672945 34.830864 35.924548 3.551770 38.458011 0.046984 29.704545 0.102857 43.304973 -0.053955 2017-07-03
4 35.922501 36.197498 35.680000 36.022499 34.194897 86278400.0 36.095357 36.634762 0.328795 0.852735 38.340231 34.929292 35.989849 3.555991 38.240991 0.049445 29.954520 0.099254 43.438321 -0.053936 2017-07-05
5 35.755001 35.875000 35.602501 35.682499 33.872143 96515200.0 35.984999 36.495238 0.346702 0.677629 37.850495 35.139980 35.784949 3.546235 38.027974 0.051918 30.209839 0.095602 43.557403 -0.053820 2017-07-06
6 35.724998 36.187500 35.724998 36.044998 34.216255 76806800.0 36.001071 36.362023 0.387422 0.387634 37.137291 35.586756 35.958315 3.556633 37.818962 0.054401 30.470232 0.091907 43.662260 -0.053608 2017-07-07
9 36.027500 36.487499 35.842499 36.264999 34.425095 84362400.0 35.973571 36.243809 0.388315 0.308042 36.859893 35.627725 36.162771 3.562891 37.613953 0.056893 30.735430 0.088177 43.752965 -0.053302 2017-07-10
In [ ]:
# Set the Date column as the DatetimeIndex and sort chronologically
datetime_series = pd.to_datetime(dataset_final['Date'])
datetime_index = pd.DatetimeIndex(datetime_series.values)
dataset_final = dataset_final.set_index(datetime_index)
dataset_final = dataset_final.sort_values(by='Date')
dataset_final = dataset_final.drop(columns='Date')
dataset_final.head(5)
Out[ ]:
Open High Low Close Adj Close Volume MA7 MA21 MACD 20SD upper_band lower_band EMA logmomentum absolute of 3 comp angle of 3 comp absolute of 6 comp angle of 6 comp absolute of 9 comp angle of 9 comp
2017-07-03 36.220001 36.325001 35.775002 35.875000 34.054882 57111200.0 36.173571 36.751904 0.303356 0.960520 38.672945 34.830864 35.924548 3.551770 38.458011 0.046984 29.704545 0.102857 43.304973 -0.053955
2017-07-05 35.922501 36.197498 35.680000 36.022499 34.194897 86278400.0 36.095357 36.634762 0.328795 0.852735 38.340231 34.929292 35.989849 3.555991 38.240991 0.049445 29.954520 0.099254 43.438321 -0.053936
2017-07-06 35.755001 35.875000 35.602501 35.682499 33.872143 96515200.0 35.984999 36.495238 0.346702 0.677629 37.850495 35.139980 35.784949 3.546235 38.027974 0.051918 30.209839 0.095602 43.557403 -0.053820
2017-07-07 35.724998 36.187500 35.724998 36.044998 34.216255 76806800.0 36.001071 36.362023 0.387422 0.387634 37.137291 35.586756 35.958315 3.556633 37.818962 0.054401 30.470232 0.091907 43.662260 -0.053608
2017-07-10 36.027500 36.487499 35.842499 36.264999 34.425095 84362400.0 35.973571 36.243809 0.388315 0.308042 36.859893 35.627725 36.162771 3.562891 37.613953 0.056893 30.735430 0.088177 43.752965 -0.053302

Train & Test Dataset for Multistep Process

In [ ]:
# Get features and target
X_value = pd.DataFrame(dataset_final.iloc[:, :])
y_value = pd.DataFrame(dataset_final.iloc[:, 3])
In [ ]:
y_value.head(5)
Out[ ]:
Close
2017-07-03 35.875000
2017-07-05 36.022499
2017-07-06 35.682499
2017-07-07 36.044998
2017-07-10 36.264999
In [ ]:
# Normalize the data (scale features and target to [-1, 1])
X_scaler = MinMaxScaler(feature_range=(-1, 1))
y_scaler = MinMaxScaler(feature_range=(-1, 1))
X_scaler.fit(X_value)
y_scaler.fit(y_value)
Out[ ]:
MinMaxScaler(feature_range=(-1, 1))
In [ ]:
X_scale_dataset = X_scaler.fit_transform(X_value)
y_scale_dataset = y_scaler.fit_transform(y_value)
In [ ]:
X_scale_dataset.shape, y_scale_dataset.shape,
Out[ ]:
((1080, 20), (1080, 1))
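Because the model is trained on scaled targets, predictions must be mapped back through `y_scaler.inverse_transform` before errors are computed in price units. A minimal round-trip sketch with an invented price column:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

prices = np.array([[35.9], [36.0], [35.7], [36.3]])  # hypothetical closes
scaler = MinMaxScaler(feature_range=(-1, 1))
scaled = scaler.fit_transform(prices)        # values now lie in [-1, 1]
restored = scaler.inverse_transform(scaled)  # back to price units

print(scaled.min(), scaled.max())            # -1.0 1.0
print(np.allclose(restored, prices))         # True
```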
In [ ]:
X_value.shape[1]
Out[ ]:
20

N Steps Definition

In [ ]:
n_steps_in = 3
n_features = X_value.shape[1] # 20 features
n_steps_out = 1
In [ ]:
# Reshape the data into (samples, n_steps_in, n_features) windows for LSTM input.
# Here we use n_steps_in = 3 days of data to predict the next n_steps_out = 1 day's price.
# Get X/y dataset
def get_X_y(X_data, y_data):
    X = list()
    y = list()
    yc = list()

    length = len(X_data)
    for i in range(0, length, 1):
        X_value = X_data[i: i + n_steps_in][:, :]
        y_value = y_data[i + n_steps_in: i + (n_steps_in + n_steps_out)][:, 0]
        yc_value = y_data[i: i + n_steps_in][:, :]
        # Keep only complete windows; truncated windows at the end of the series are dropped
        if len(X_value) == n_steps_in and len(y_value) == n_steps_out:
            X.append(X_value)
            y.append(y_value)
            yc.append(yc_value)

    return np.array(X), np.array(y), np.array(yc)
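The windowing above can be checked on synthetic data. This sketch (with made-up arrays, not the stock data) uses the same window sizes, 3 steps in and 1 step out, and confirms the resulting shapes:

```python
import numpy as np

n_steps_in, n_steps_out = 3, 1
X_data = np.arange(20).reshape(10, 2).astype(float)  # 10 days, 2 features
y_data = np.arange(10).reshape(10, 1).astype(float)  # 10 target values

X, y, yc = [], [], []
for i in range(len(X_data)):
    x_win = X_data[i:i + n_steps_in]
    y_win = y_data[i + n_steps_in:i + n_steps_in + n_steps_out][:, 0]
    if len(x_win) == n_steps_in and len(y_win) == n_steps_out:
        X.append(x_win)
        y.append(y_win)
        yc.append(y_data[i:i + n_steps_in])

X, y, yc = np.array(X), np.array(y), np.array(yc)
print(X.shape, y.shape, yc.shape)  # (7, 3, 2) (7, 1) (7, 3, 1)
print(y[0])  # [3.] -- the value of the day after the first 3-day window
```

This matches the notebook: 1080 scaled rows yield 1077 windows of shape (3, 20).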
In [ ]:
# Get the train/test prediction date indices
def predict_index(dataset, X_train, n_steps_in, n_steps_out):

    # Offset by n_steps_in days, since the first n_steps_in rows are consumed by the input window
    train_predict_index = dataset.iloc[n_steps_in : X_train.shape[0] + n_steps_in + n_steps_out - 1, :].index
    test_predict_index = dataset.iloc[X_train.shape[0] + n_steps_in:, :].index

    return train_predict_index, test_predict_index
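The index arithmetic can be sanity-checked on a toy frame. With 10 rows, a 3-day window, 1-step output and 5 training windows (all invented numbers for illustration), the train index covers rows 3-7 and the test index rows 8-9, matching the 2 remaining test windows:

```python
import numpy as np
import pandas as pd

dataset = pd.DataFrame({'close': np.arange(10)},
                       index=pd.date_range('2021-01-01', periods=10))
n_steps_in, n_steps_out = 3, 1
n_train_windows = 5  # stand-in for X_train.shape[0]

train_idx = dataset.iloc[n_steps_in : n_train_windows + n_steps_in + n_steps_out - 1].index
test_idx = dataset.iloc[n_train_windows + n_steps_in:].index

print(len(train_idx), len(test_idx))  # 5 2
print(train_idx[0].date(), test_idx[0].date())  # 2021-01-04 2021-01-09
```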
In [ ]:
# Note: this overrides sklearn's mean_absolute_percentage_error imported above,
# and returns a percentage rather than a fraction
def mean_absolute_percentage_error(actual, prediction):
    actual = pd.Series(actual)
    prediction = pd.Series(prediction)
    return 100 * np.mean(np.abs(actual - prediction) / actual)
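A quick worked check of this function with invented values: errors of 10 on actuals of 100 and 200 give 10% and 5%, so the MAPE is 7.5%:

```python
import numpy as np
import pandas as pd

def mape(actual, prediction):
    # Same formula as the cell above, under a different name for the check
    actual = pd.Series(actual)
    prediction = pd.Series(prediction)
    return 100 * np.mean(np.abs(actual - prediction) / actual)

print(mape([100.0, 200.0], [110.0, 190.0]))  # 7.5
```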
In [ ]:
# Split train/test dataset (75/25, unshuffled to preserve time order)
def split_train_test(data):
    train_size = round(len(data) * 0.75)
    data_train = data[0:train_size]
    data_test = data[train_size:]
    return data_train, data_test
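With the 1077 windows produced above, this 75/25 split gives round(1077 * 0.75) = 808 training and 269 test samples, matching the shapes printed below. A self-contained check with a dummy array of the same length:

```python
import numpy as np

def split_train_test(data):
    # 75/25 chronological split, as in the cell above
    train_size = round(len(data) * 0.75)
    return data[:train_size], data[train_size:]

dummy = np.zeros((1077, 3, 20))  # same shape as the windowed X
train, test = split_train_test(dummy)
print(train.shape[0], test.shape[0])  # 808 269
```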
In [ ]:
# Get data and check shapes
# X has shape (samples, n_steps_in, n_features): each 3 x 20 array holds 3 days' worth
# of features; yc holds the corresponding closing prices over the same 3-day window
X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)
X_train, X_test = split_train_test(X)
y_train, y_test = split_train_test(y)
yc_train, yc_test = split_train_test(yc)
index_train, index_test = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
In [ ]:
# %% --------------------------------------- Check dataset shapes ---------------------------------------
print('X shape: ', X.shape)
print('y shape: ', y.shape)
print('X_train shape: ', X_train.shape)
print('y_train shape: ', y_train.shape)
print('y_c_train shape: ', yc_train.shape)
print('X_test shape: ', X_test.shape)
print('y_test shape: ', y_test.shape)
print('y_c_test shape: ', yc_test.shape)
print('index_train shape:', index_train.shape)
print('index_test shape:', index_test.shape)
X shape:  (1077, 3, 20)
y shape:  (1077, 1)
X_train shape:  (808, 3, 20)
y_train shape:  (808, 1)
y_c_train shape:  (808, 3, 1)
X_test shape:  (269, 3, 20)
y_test shape:  (269, 1)
y_c_test shape:  (269, 3, 1)
index_train shape: (808,)
index_test shape: (269,)
In [ ]:
output_dim = y_train.shape[1]
output_dim
Out[ ]:
1
In [ ]:
df = dataset_final.copy()
In [ ]:
df.rename(columns={'Date':'date','Open':'open','Low':'low','Close':'close','Volume':'volume','High':'high'}, inplace=True)
df.reset_index(drop=True,inplace=True)
In [ ]:
df.head(1)
Out[ ]:
open high low close Adj Close volume MA7 MA21 MACD 20SD upper_band lower_band EMA logmomentum absolute of 3 comp angle of 3 comp absolute of 6 comp angle of 6 comp absolute of 9 comp angle of 9 comp
0 36.220001 36.325001 35.775002 35.875 34.054882 57111200.0 36.173571 36.751904 0.303356 0.96052 38.672945 34.830864 35.924548 3.55177 38.458011 0.046984 29.704545 0.102857 43.304973 -0.053955
In [ ]:
# df.drop(['volume', 'MACD','20SD','logmomentum','absolute of 3 comp','angle of 3 comp','absolute of 6 comp','angle of 6 comp','absolute of 9 comp','angle of 9 comp'], axis='columns', inplace=True) # only keep columns that can help as residuals in Arima Hybrid
In [ ]:
df.head(1)
Out[ ]:
open high low close Adj Close volume MA7 MA21 MACD 20SD upper_band lower_band EMA logmomentum absolute of 3 comp angle of 3 comp absolute of 6 comp angle of 6 comp absolute of 9 comp angle of 9 comp
0 36.220001 36.325001 35.775002 35.875 34.054882 57111200.0 36.173571 36.751904 0.303356 0.96052 38.672945 34.830864 35.924548 3.55177 38.458011 0.046984 29.704545 0.102857 43.304973 -0.053955

Train & Test Length

In [ ]:
test_len = len(X_test)
In [ ]:
train_len = len(X_train)
In [ ]:
test_len, train_len
Out[ ]:
(269, 808)

Kurtosis Review

In [ ]:
# Initialize moving averages from TA-Lib, store functions in a dictionary
# MIDPRICE is excluded because it requires high/low inputs while the output here is univariate
talib_moving_averages = ['SMA', 'EMA', 'WMA', 'DEMA', 'KAMA', 'MIDPOINT', 'T3', 'TEMA', 'TRIMA']
functions = {}
for ma in talib_moving_averages:
    functions[ma] = abstract.Function(ma)

# Determine kurtosis "K" values for MA periods 4-99
kurtosis_results = {'period': []}
for i in range(4, 100):
    kurtosis_results['period'].append(i)
    for ma in talib_moving_averages:
        # Run the moving average on the training portion (test set excluded),
        # then trim the MA result to the last 14 days
        ma_output = functions[ma](df[:-test_len], i).tail(14)
        # Determine the kurtosis "K" value (fisher=False: mesokurtic K = 3)
        k = kurtosis(ma_output, fisher=False)
        # Add to dictionary
        if ma not in kurtosis_results.keys():
            kurtosis_results[ma] = []
        kurtosis_results[ma].append(k)

kurtosis_results = pd.DataFrame(kurtosis_results)
kurtosis_results.to_csv('kurtosis_results.csv')
In [ ]:
kurtosis_results.head(5)
Out[ ]:
period SMA EMA WMA DEMA KAMA MIDPOINT T3 TEMA TRIMA
0 4 2.272452 2.652772 2.896972 3.800351 2.299585 2.171369 1.978458 4.609342 2.411225
1 5 1.839451 2.355815 2.481058 3.327525 1.841282 1.826597 1.640277 4.262302 1.994382
2 6 1.583886 2.159532 2.194320 2.945924 1.536136 1.605787 1.510972 3.878845 1.679710
3 7 1.461290 2.026758 1.990629 2.651927 1.506197 1.558096 1.514015 3.510432 1.486348
4 8 1.447516 1.935302 1.853935 2.429648 1.509566 1.621595 1.601580 3.184123 1.373337

Optimized Periods

In [ ]:
# Determine period with K closest to 3 +/-5%
optimized_period = {}
# https://pypi.org/project/TA-Lib/ determines the type of moving average to use
# https://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.DataFrame.at.html#pandas.DataFrame.at
for ma in talib_moving_averages:
        difference = np.abs(kurtosis_results[ma] - 3)
        df_arimahyb = pd.DataFrame({'difference': difference, 'period': kurtosis_results['period']})
        df_arimahyb = df_arimahyb.sort_values(by=['difference'], ascending=True).reset_index(drop=True)
        if df_arimahyb.at[0, 'difference'] < 3 * 0.05:
            optimized_period[ma] = df_arimahyb.at[0, 'period']
        else:
            print(ma + ' is not viable, best K greater or less than 3 +/-5%')

print('\nOptimized periods:', optimized_period)
TRIMA is not viable, best K greater or less than 3 +/-5%

Optimized periods: {'SMA': 17, 'EMA': 51, 'WMA': 49, 'DEMA': 89, 'KAMA': 18, 'MIDPOINT': 14, 'T3': 19, 'TEMA': 9}
In [ ]:
optimized_period
Out[ ]:
{'DEMA': 89,
 'EMA': 51,
 'KAMA': 18,
 'MIDPOINT': 14,
 'SMA': 17,
 'T3': 19,
 'TEMA': 9,
 'WMA': 49}
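The low-/high-volatility split performed in the next cell can be illustrated without TA-Lib: subtracting a moving average (here pandas' rolling mean as a stand-in for the `SMA` function) from the series leaves a residual, and the two components add back to the original price. A sketch on an invented series, using the SMA period of 17 found above:

```python
import numpy as np
import pandas as pd

close = pd.Series(np.sin(np.linspace(0, 6, 60)) * 5 + 100)  # hypothetical prices
period = 17  # the optimized SMA period from above

low_vol = close.rolling(period).mean().fillna(0)   # smooth component (ARIMA side)
high_vol = close.subtract(low_vol, fill_value=0)   # residual component (LSTM side)

# The two components reconstruct the original series exactly
print(np.allclose(low_vol + high_vol, close))  # True
```

Because the MA is undefined for the first `period - 1` rows (filled with 0 here, as in the cell below), the "residual" over that warm-up region is just the raw price.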

Simulation Keys

In [ ]:
# Decompose each series into a smooth low-volatility component (the moving average)
# and a high-volatility residual (price minus MA); note that low_vol/high_vol are
# overwritten on each pass, so only the last MA's decomposition survives the loop
simulation = {}
for ma in optimized_period:
    print(ma)
    print(functions[ma])
    print(int(optimized_period[ma]))
    low_vol = df.apply(lambda c: functions[ma](c, timeperiod=int(optimized_period[ma])))
    low_vol = low_vol.fillna(0)
    high_vol = pd.DataFrame()
    df2 = df.copy()
    for i in df2.columns:
        if i in low_vol.columns:
            high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
SMA
SMA([input_arrays], [timeperiod=30])

Simple Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
17
EMA
EMA([input_arrays], [timeperiod=30])

Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
51
WMA
WMA([input_arrays], [timeperiod=30])

Weighted Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
49
DEMA
DEMA([input_arrays], [timeperiod=30])

Double Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
89
KAMA
KAMA([input_arrays], [timeperiod=30])

Kaufman Adaptive Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
18
MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])

MidPoint over period (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 14
Outputs:
    real
14
T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])

Triple Exponential Moving Average (T3) (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 5
    vfactor: 0.7
Outputs:
    real
19
TEMA
TEMA([input_arrays], [timeperiod=30])

Triple Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
9
In [ ]:
low_vol.tail(20)
Out[ ]:
open high low close Adj Close volume MA7 MA21 MACD 20SD upper_band lower_band EMA logmomentum absolute of 3 comp angle of 3 comp absolute of 6 comp angle of 6 comp absolute of 9 comp angle of 9 comp
1060 140.200839 141.942909 138.524500 140.171495 139.966842 8.852448e+07 142.165478 146.699207 1.815578 4.572948 155.845103 137.553312 140.365562 4.935800 105.739092 -0.047411 125.318767 -0.018291 140.471430 -0.008749
1061 139.425914 141.705469 138.035200 140.698014 140.492650 8.620711e+07 141.528981 145.978836 2.115887 4.189393 154.357621 137.600050 140.587196 4.939545 105.263514 -0.048037 124.464999 -0.019222 139.335869 -0.009472
1062 140.773058 142.636405 139.932338 141.733666 141.526843 7.421445e+07 141.294887 145.298477 2.211018 3.647690 152.593858 138.003097 141.351509 4.946870 104.786174 -0.048658 123.598217 -0.020150 138.164839 -0.010188
1063 142.179695 143.266994 141.127848 142.249061 142.041527 6.519616e+07 141.224295 144.665584 2.093072 3.241276 151.148137 138.183031 141.949877 4.950518 104.307114 -0.049275 122.718682 -0.021074 136.959041 -0.010898
1064 142.253947 144.008334 141.546689 142.555532 142.347589 6.254214e+07 141.336839 144.184381 1.988881 2.884864 149.954110 138.414652 142.353647 4.952685 103.826381 -0.049886 121.826667 -0.021994 135.719217 -0.011600
1065 142.782738 143.732491 141.438660 142.125353 141.918068 6.542511e+07 141.385297 143.758659 1.774804 2.626682 149.012024 138.505294 142.201451 4.949632 103.344020 -0.050491 120.922446 -0.022909 134.446150 -0.012293
1066 142.153085 142.656915 140.466684 141.564232 141.357788 7.040262e+07 141.585336 143.387397 1.634667 2.376817 148.141030 138.633764 141.776638 4.945637 102.860075 -0.051092 120.006305 -0.023818 133.140665 -0.012977
1067 142.177201 143.194327 140.977156 142.610382 142.402435 6.948112e+07 141.933749 143.094536 1.573317 2.074153 147.242842 138.946230 142.332468 4.953023 102.374593 -0.051687 119.078535 -0.024722 131.803627 -0.013650
1068 143.009006 144.052615 142.286776 143.812497 143.602819 6.805244e+07 142.378675 142.879716 1.473333 1.874158 146.628032 139.131400 143.319154 4.961467 101.887619 -0.052275 118.139433 -0.025618 130.435938 -0.014311
1069 143.380322 145.547752 142.940349 145.397429 145.185452 7.592729e+07 142.902069 142.813890 1.447641 1.844159 146.502207 139.125573 144.704671 4.972505 101.399198 -0.052858 117.189304 -0.026508 129.038540 -0.014959
1070 145.337970 147.615882 144.980528 147.444584 147.229635 7.653090e+07 143.644287 142.961273 1.284466 2.010227 146.981728 138.940819 146.531280 4.986604 100.909377 -0.053435 116.228458 -0.027389 127.612408 -0.015592
1071 147.375283 149.163050 146.995423 148.921380 148.704294 6.811986e+07 144.553694 143.236380 0.961952 2.270386 147.777152 138.695607 148.124680 4.996737 100.418203 -0.054006 115.257214 -0.028261 126.158555 -0.016211
1072 148.656821 150.010875 148.071943 149.870634 149.652170 6.425222e+07 145.660163 143.530869 0.589081 2.556352 148.643574 138.418164 149.288649 5.003230 99.925720 -0.054570 114.275894 -0.029124 124.678027 -0.016812
1073 149.806550 150.715254 149.026204 149.977942 149.759331 6.069918e+07 146.862121 143.785380 0.135134 2.805932 149.397244 138.173516 149.748178 5.003989 99.431976 -0.055128 113.284828 -0.029977 123.171903 -0.017396
1074 149.937482 150.666013 149.022091 149.911667 149.693162 5.465321e+07 147.905162 144.001463 -0.245163 3.045742 150.092948 137.909978 149.857170 5.003545 98.937018 -0.055679 112.284350 -0.030820 121.641290 -0.017961
1075 150.228161 151.254072 149.586503 150.104281 149.885502 5.602702e+07 148.803988 144.237215 -0.571069 3.270011 150.777237 137.697192 150.021910 5.004835 98.440892 -0.056223 111.274800 -0.031650 120.087330 -0.018506
1076 150.328251 150.997797 149.591175 149.912656 149.694163 5.484778e+07 149.449021 144.548659 -0.850904 3.458615 151.465890 137.631428 149.949074 5.003520 97.943645 -0.056759 110.256524 -0.032469 118.511190 -0.019029
1077 150.525566 152.430694 150.099878 151.531571 151.310718 7.580033e+07 150.032876 144.967153 -0.975625 3.719924 152.407001 137.527305 151.004072 5.014296 97.445324 -0.057289 109.229873 -0.033274 116.914063 -0.019528
1078 149.301052 151.688142 148.723104 151.137179 150.916905 1.012990e+08 150.349418 145.413317 -0.891585 3.905336 153.223988 137.602646 151.092810 5.011652 96.945977 -0.057811 108.195203 -0.034066 115.297171 -0.020004
1079 149.321425 151.018197 148.455004 150.396057 150.176865 9.262134e+07 150.424479 145.823313 -0.852689 3.878291 153.579894 138.066731 150.628308 5.006660 96.445650 -0.058325 107.152874 -0.034844 113.661756 -0.020453
In [ ]:
high_vol.head(10)
Out[ ]:
open high low close Adj Close volume MA7 MA21 MACD 20SD upper_band lower_band EMA logmomentum absolute of 3 comp angle of 3 comp absolute of 6 comp angle of 6 comp absolute of 9 comp angle of 9 comp
0 36.220001 36.325001 35.775002 35.875000 34.054882 57111200.0 36.173571 36.751904 0.303356 0.960520 38.672945 34.830864 35.924548 3.551770 38.458011 0.046984 29.704545 0.102857 43.304973 -0.053955
1 35.922501 36.197498 35.680000 36.022499 34.194897 86278400.0 36.095357 36.634762 0.328795 0.852735 38.340231 34.929292 35.989849 3.555991 38.240991 0.049445 29.954520 0.099254 43.438321 -0.053936
2 35.755001 35.875000 35.602501 35.682499 33.872143 96515200.0 35.984999 36.495238 0.346702 0.677629 37.850495 35.139980 35.784949 3.546235 38.027974 0.051918 30.209839 0.095602 43.557403 -0.053820
3 35.724998 36.187500 35.724998 36.044998 34.216255 76806800.0 36.001071 36.362023 0.387422 0.387634 37.137291 35.586756 35.958315 3.556633 37.818962 0.054401 30.470232 0.091907 43.662260 -0.053608
4 36.027500 36.487499 35.842499 36.264999 34.425095 84362400.0 35.973571 36.243809 0.388315 0.308042 36.859893 35.627725 36.162771 3.562891 37.613953 0.056893 30.735430 0.088177 43.752965 -0.053302
5 36.182499 36.462502 36.095001 36.382500 34.536625 79127200.0 36.039642 36.202738 0.372153 0.308860 36.820458 35.585018 36.309257 3.566217 37.412947 0.059392 31.005161 0.084416 43.829622 -0.052901
6 36.467499 36.544998 36.205002 36.435001 34.586472 99538000.0 36.101071 36.206547 0.317572 0.295861 36.798268 35.614826 36.393086 3.567700 37.215939 0.061899 31.279154 0.080632 43.892360 -0.052406
7 36.375000 37.122501 36.360001 36.942501 35.068211 100797600.0 36.253571 36.220595 0.322643 0.340687 36.901969 35.539221 36.759363 3.581920 37.022928 0.064410 31.557136 0.076830 43.941338 -0.051818
8 36.992500 37.332500 36.832500 37.259998 35.369610 80528400.0 36.430357 36.266785 0.257925 0.410484 37.087753 35.445818 37.093120 3.590715 36.833908 0.066926 31.838833 0.073014 43.976744 -0.051137
9 37.205002 37.724998 37.142502 37.389999 35.493000 95174000.0 36.674285 36.329523 0.184267 0.445597 37.220717 35.438330 37.291039 3.594294 36.648875 0.069445 32.123972 0.069192 43.998789 -0.050365

Common Functions

In [152]:
def get_arima(dataframe,original_data, train_len, test_len):
    # prepare train and test data
    X_value = pd.DataFrame(dataframe.iloc[:, :])
    y_value = pd.DataFrame(dataframe.iloc[:, 3])
    X_train, X_test = split_train_test(X_value)
    y_train, y_test = split_train_test(y_value)
    yc_train,yc_test = split_train_test(original_data)
    # y_train_ = y_train['close'].to_list()
    # y_test_ = y_test['close'].to_list()
    yc = yc_test.values.tolist()
    y_train_list = y_train['close'].values.tolist() 
    y_test_list = y_test['close'].values.tolist()                                           
      
    # Initialize the model and determine the (p, d, q) order via stepwise search
    model = auto_arima(y_train_list, trace=True, error_action='ignore',
                       start_p=1, start_q=1, max_p=3, max_q=3, d=3,
                       suppress_warnings=True, stepwise=True, seasonal=True)
    print(model.summary())
    model.fit(y_train_list, disp=0)
    order = model.get_params()['order']
    print('ARIMA order:', order, '\n')

    # Generate rolling one-step-ahead predictions over the test window,
    # refitting on the history extended by each observed test value
    prediction = []
    for i in range(len(y_test_list)):
        model = pmdarima.ARIMA(order=order)
        model.fit(y_train_list, disp=0)
        # print('working on', i+1, 'of', len(y_test_list), '-- ' + str(int(100 * (i + 1) / len(y_test_list))) + '% complete')
        prediction.append(model.predict()[0])
        y_train_list.append(y_test_list[i])

    # Generate error metrics
    mse = mean_squared_error(yc_test, prediction)
    rmse = mse ** 0.5
    # mape = mean_absolute_percentage_error(pd.Series(yc_test).values.tolist(), pd.Series(prediction).values.tolist())
    mae = mean_absolute_error(pd.Series(yc_test).values.tolist(), pd.Series(prediction).values.tolist())
    return yc, prediction, mse, rmse, mae
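`split_train_test` is called throughout but defined in an earlier part of the notebook. A minimal sketch of what it is assumed to do, given the global `train_len`/`test_len` convention visible in the run output below (808/269 for this dataset) — the function body here is illustrative, not the original:

```python
import pandas as pd

# Assumed global split sizes, matching "parameters used : 808 269" in the output
train_len, test_len = 808, 269

def split_train_test(data):
    # First train_len rows form the train set; the remainder is the test set
    train = data[:train_len]
    test = data[train_len:]
    return train, test
```

Any DataFrame or Series sliced this way keeps its original index, which is what lets the plotting functions later align predictions with saved date indices.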
In [153]:
def plot_train(simulation,SIM):
  train_predict_index = np.load("index_train_appl.npy", allow_pickle=True)#Dates for train data

  predict_result = pd.DataFrame()
  for i in range(len(simulation[SIM]['final_tr']['prediction'])):
          y_predict = pd.DataFrame(simulation[SIM]['final_tr']['prediction'][i], columns=["predicted_price"],
                                  index=train_predict_index[i:i + output_dim])
          predict_result = pd.concat([predict_result, y_predict], axis=1, sort=False)
          
          #This is a dataframe with each column containing the predicted daily closing price
  real_price = pd.DataFrame()
  for i in range(len(simulation[SIM]['final_tr']['original'])):
          y_train = pd.DataFrame(simulation[SIM]['final_tr']['original'][i], columns=["real_price"],
                                index=train_predict_index[i:i + output_dim])
          real_price = pd.concat([real_price, y_train], axis=1, sort=False)  #This is a dataframe with each column containing the real daily closing price

  predict_result['predicted_mean'] = predict_result.mean(axis=1)#Adding a column with the daily predicted closing price value
  real_price['real_mean'] = real_price.mean(axis=1)#Adding a column with the daily real closing price value
      #
      # Plot the predicted result
  plt.figure(figsize=(16, 8))
  plt.plot(real_price["real_mean"])
  plt.plot(predict_result["predicted_mean"], color='r')
  plt.xlabel("Date")
  plt.ylabel("Stock price")
  plt.legend(("Real price", "Predicted price"), loc="upper left", fontsize=16)
  plt.title(f"The result of Training for Hybrid Arima LSTM with MA - {SIM} : {fileimg}",fontsize=20)
  sf = fileimg+'_'+SIM+'Train Hybrid Arima LSTM Pred Out.png'
  plt.savefig(sf,dpi='figure')
  plt.show()

      # Calculate RMSE
  predicted = predict_result["predicted_mean"]
  real = real_price["real_mean"]
  RMSE = np.sqrt(mean_squared_error(predicted, real))
  MSE = mean_squared_error(predicted, real)
  MAE = mean_absolute_error(predicted, real)
  print(f"----- Train RMSE for {SIM} -----", RMSE)
  print(f"----- Train MSE for {SIM} -----", MSE)
  print(f"----- Train MAE for {SIM} -----", MAE)
In [154]:
def plot_test(simulation, SIM):
  test_predict_index = np.load("index_test_appl.npy", allow_pickle=True)  # Dates for test data

      # rescaled_real_y = y_scaler.inverse_transform(y_train)#Real closing price data
      # rescaled_predicted_y = y_scaler.inverse_transform(train_yhat)#Predicted closing price data

  predict_result = pd.DataFrame()
  for i in range(len(simulation[SIM]['final']['prediction'])):
          y_predict = pd.DataFrame(simulation[SIM]['final']['prediction'][i], columns=["predicted_price"],
                                  index=test_predict_index[i:i + output_dim])
          predict_result = pd.concat([predict_result, y_predict], axis=1, sort=False)#This is a dataframe with each column containing the predicted daily closing price
      #
  real_price = pd.DataFrame()
  for i in range(len(simulation[SIM]['final']['original'])):
          y_train = pd.DataFrame(simulation[SIM]['final']['original'][i], columns=["real_price"],
                                index=test_predict_index[i:i + output_dim])
          real_price = pd.concat([real_price, y_train], axis=1, sort=False)#This is a dataframe with each column containing the real daily closing price

  predict_result['predicted_mean'] = predict_result.mean(axis=1)#Adding a column with the daily predicted closing price value
  real_price['real_mean'] = real_price.mean(axis=1)#Adding a column with the daily real closing price value
      #
      # Plot the predicted result
  plt.figure(figsize=(16, 8))
  plt.plot(real_price["real_mean"])
  plt.plot(predict_result["predicted_mean"], color='r')
  plt.xlabel("Date")
  plt.ylabel("Stock price")
  plt.legend(("Real price", "Predicted price"), loc="upper left", fontsize=16)
  plt.title(f"The result of Testing for Hybrid Arima LSTM with MA - {SIM} : {fileimg}",fontsize=20)
  sf = fileimg+'_'+SIM+'Test Hybrid Arima LSTM Pred Out.png'
  plt.savefig(sf,dpi='figure')
  plt.show()

      # Calculate RMSE
  predicted = predict_result["predicted_mean"]
  real = real_price["real_mean"]
  RMSE = np.sqrt(mean_squared_error(predicted, real))
  MSE = mean_squared_error(predicted, real)
  MAE = mean_absolute_error(predicted, real)
  print(f"----- Test RMSE for {SIM} -----", RMSE)
  print(f"----- Test MSE for {SIM} -----", MSE)
  print(f"----- Test MAE for {SIM} -----", MAE)
In [155]:
def plot_train_high(simulation, SIM):
  train_predict_index = np.load("index_test_appl.npy", allow_pickle=True)  # Dates for test data

  predict_result = pd.DataFrame()
  for i in range(len(simulation[SIM]['high_vol']['prediction'])):
          y_predict = pd.DataFrame(simulation[SIM]['high_vol']['prediction'][i], columns=["predicted_price"],
                                  index=train_predict_index[i:i + output_dim])
          predict_result = pd.concat([predict_result, y_predict], axis=1, sort=False)
          
          #This is a dataframe with each column containing the predicted daily closing price
  real_price = pd.DataFrame()
  for i in range(len(simulation[SIM]['high_vol']['original'])):
          y_train = pd.DataFrame(simulation[SIM]['high_vol']['original'][i], columns=["real_price"],
                                index=train_predict_index[i:i + output_dim])
          real_price = pd.concat([real_price, y_train], axis=1, sort=False)  #This is a dataframe with each column containing the real daily closing price

  predict_result['predicted_mean'] = predict_result.mean(axis=1)#Adding a column with the daily predicted closing price value
  real_price['real_mean'] = real_price.mean(axis=1)#Adding a column with the daily real closing price value
      #
      # Plot the predicted result
  plt.figure(figsize=(16, 8))
  plt.plot(real_price["real_mean"])
  plt.plot(predict_result["predicted_mean"], color='r')
  plt.xlabel("Date")
  plt.ylabel("Stock price")
  plt.legend(("Real price", "Predicted price"), loc="upper left", fontsize=16)
  plt.title(f"Individual LSTM (high-volatility component) result for {SIM}", fontsize=20)
  plt.show()

      # Calculate RMSE
  predicted = predict_result["predicted_mean"]
  real = real_price["real_mean"]
  RMSE = np.sqrt(mean_squared_error(predicted, real))
  MSE = mean_squared_error(predicted, real)
  MAE = mean_absolute_error(predicted, real)
  print(f"----- Individual LSTM RMSE for {SIM} -----", RMSE)
  print(f"----- Individual LSTM MSE for {SIM} -----", MSE)
  print(f"----- Individual LSTM MAE for {SIM} -----", MAE)
In [156]:
def plot_train_low(simulation , SIM):
  train_predict_index = np.load("index_test_appl.npy", allow_pickle=True)  # Dates for test data

  predict_result = pd.DataFrame()
  for i in range(len(simulation[SIM]['low_vol']['prediction'])):
          y_predict = pd.DataFrame(simulation[SIM]['low_vol']['prediction'][i], columns=["predicted_price"],
                                  index=train_predict_index[i:i + output_dim])
          predict_result = pd.concat([predict_result, y_predict], axis=1, sort=False)
          
          #This is a dataframe with each column containing the predicted daily closing price
  real_price = pd.DataFrame()
  for i in range(len(simulation[SIM]['low_vol']['original'])):
          y_train = pd.DataFrame(simulation[SIM]['low_vol']['original'][i], columns=["real_price"],
                                index=train_predict_index[i:i + output_dim])
          real_price = pd.concat([real_price, y_train], axis=1, sort=False)  #This is a dataframe with each column containing the real daily closing price

  predict_result['predicted_mean'] = predict_result.mean(axis=1)#Adding a column with the daily predicted closing price value
  real_price['real_mean'] = real_price.mean(axis=1)#Adding a column with the daily real closing price value
      #
      # Plot the predicted result
  plt.figure(figsize=(16, 8))
  plt.plot(real_price["real_mean"])
  plt.plot(predict_result["predicted_mean"], color='r')
  plt.xlabel("Date")
  plt.ylabel("Stock price")
  plt.legend(("Real price", "Predicted price"), loc="upper left", fontsize=16)
  plt.title(f"ARIMA (low-volatility component) result for {SIM}", fontsize=20)
  plt.show()

      # Calculate RMSE
  predicted = predict_result["predicted_mean"]
  real = real_price["real_mean"]
  RMSE = np.sqrt(mean_squared_error(predicted, real))
  MSE = mean_squared_error(predicted, real)
  MAE = mean_absolute_error(predicted, real)
  print(f"----- Arima RMSE for {SIM} -----", RMSE)
  print(f"----- Arima MSE for {SIM} -----", MSE)
  print(f"----- Arima MAE for {SIM} -----", MAE)

Univariate ARIMA Multistep Multivariate LSTM Hybrid Model Experiment 1
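This experiment splits each series into a smooth low-volatility component (handed to ARIMA) and a residual high-volatility component (handed to the LSTM), as the main loop below does with `functions[ma]` and a per-column subtraction. A minimal sketch of that decomposition, using a pandas rolling mean in place of the TA-Lib call (the function and default period here are illustrative):

```python
import pandas as pd

def decompose(close, timeperiod=17):
    # Smooth component: moving average over `timeperiod` days -> modelled by ARIMA
    low_vol = close.rolling(timeperiod).mean().fillna(0)
    # Residual component: what the MA does not capture -> modelled by the LSTM
    high_vol = close - low_vol
    return low_vol, high_vol
```

The two component forecasts are later recombined by simple addition, so the decomposition must be exactly invertible: `low_vol + high_vol == close` at every row (after the `fillna(0)` warm-up).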

In [ ]:
def get_lstm(data,original_data, train_len, test_len,img_file,ma ,lstm_len=3):
    # prepare train and test data
    X_value = pd.DataFrame(data.iloc[:, :])
    y_value = pd.DataFrame(data.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    # Window the scaled data: X has shape (samples, 3, n_features), i.e. each sample is
    # 3 days' worth of features; yc holds the closing prices inside each window
    X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)
    # pdb.set_trace()
    X_train, X_test, = split_train_test(X)
    y_train, y_test, = split_train_test(y)
    # yc_train, yc_test, = split_train_test(original_data)
    index_train, index_test, = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    det = 20  # ad-hoc level offset subtracted from the test predictions below
    input_dim = X_train.shape[1]     # n_steps_in (look-back days)
    feature_size = X_train.shape[2]  # number of input features
    output_dim = y_train.shape[1]    # n_steps_out (prediction steps)



    # Option 1
    # Set up & fit LSTM RNN
    model = Sequential()
    model.add(LSTM(256, activation='relu', kernel_initializer='he_normal', input_shape=(input_dim, feature_size)))
    model.add(Dense(units=64,activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(units=output_dim))
    model.compile(optimizer=Adam(learning_rate = 0.001), loss='mse')

    ## Common code
    callbacks = [
    EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    fname1 = img_file+'.png'
    tensorflow.keras.utils.plot_model(
        model, to_file=fname1, show_shapes=True, show_dtype=False,
        show_layer_names=True, expand_nested=False, dpi=96,
        layer_range=None, show_layer_activations=False
    )
    history = model.fit(X_train, y_train, epochs=500, batch_size=int( optimized_period[ma]), verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # plot loss
    fname2 = img_file+'-'+ma
    plt.title(img_file+'-'+ma+' Loss')
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    pyplot.plot(history.history['loss'], label='train')
    pyplot.plot(history.history['val_loss'], label='validation')
    pyplot.legend()
    pyplot.savefig(fname2+'.png',dpi='figure')
    pyplot.show()


    # # option 2
    # model = Sequential()
    # model.add(Bidirectional(LSTM(units= 128), input_shape=(input_dim, feature_size)))
    # model.add(Dense(64))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(lr = 0.001), loss='mean_squared_error', metrics=['accuracy'])
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()

    # Option 3
    # define custom activation
    # 
    # class Double_Tanh(Activation):
    #     def __init__(self, activation, **kwargs):
    #         super(Double_Tanh, self).__init__(activation, **kwargs)
    #         self.__name__ = 'double_tanh'

    # def double_tanh(x):
    #     return (K.tanh(x) * 2)

    # get_custom_objects().update({'double_tanh':Double_Tanh(double_tanh)})
    #     # Model Generation
    # model = Sequential()
    # #check https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
    # model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2, kernel_regularizer=l1_l2(0.00,0.00), bias_regularizer=l1_l2(0.00,0.00)))
    # model.add(Dense(1))
    # model.add(Activation(double_tanh))
    # model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()

    # Option 4
    # Set up & fit LSTM RNN
    # model = Sequential()
    # model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(x_train.shape[1], 1)))
    # model.add(LSTM(units=int(lstm_len/2)))
    # model.add(Dense(1, activation='sigmoid'))
    # model.compile(loss='mean_squared_error', optimizer='adam')
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()



    # Generate train predictions, inverse-transformed back to the price scale
    predictiontr = model.predict(X_train, verbose=0)
    predictiontr = y_scaler.inverse_transform(predictiontr).tolist()
    outputtr = []
    for i in range(len(predictiontr)):
        outputtr.extend(predictiontr[i])
    predictiontr = outputtr

    # Generate train error metrics, comparing both series on the price scale
    Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()
    mse_tr = mean_squared_error(Original_tr, predictiontr)
    rmse_tr = mse_tr ** 0.5
    mae_tr = mean_absolute_error(Original_tr, pd.Series(predictiontr))


    # Generate test predictions, inverse-transformed with the ad-hoc offset removed
    predictionte = model.predict(X_test, verbose=0)
    predictionte = (y_scaler.inverse_transform(predictionte) - det).tolist()
    outputte = []
    for i in range(len(predictionte)):
        outputte.extend(predictionte[i])
    predictionte = outputte

    # Generate test error metrics, comparing both series on the price scale
    Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()
    mse_te = mean_squared_error(Original_te, predictionte)
    rmse_te = mse_te ** 0.5
    mae_te = mean_absolute_error(Original_te, pd.Series(predictionte))

    return Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,Original_te,predictionte, mse_te,rmse_te,mae_te
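`get_X_y` is also defined earlier in the notebook. A hedged sketch of the windowing it is assumed to perform, with `n_steps_in=3` look-back days and `n_steps_out=1` prediction step as implied by the shape comments in `get_lstm` (this body is illustrative; the original may pass 2-D scaled arrays, whereas this sketch takes a 1-D target for clarity):

```python
import numpy as np

def get_X_y(X_data, y_data, n_steps_in=3, n_steps_out=1):
    X, y, yc = [], [], []
    for i in range(len(X_data) - n_steps_in - n_steps_out + 1):
        X.append(X_data[i:i + n_steps_in])                              # 3-day feature window
        y.append(y_data[i + n_steps_in:i + n_steps_in + n_steps_out])   # next-day target
        yc.append(y_data[i:i + n_steps_in])                             # targets inside the window
    return np.array(X), np.array(y), np.array(yc)
```

With 10 rows of 4 features this yields 7 samples of shape (3, 4), each paired with the closing value of the following day.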
In [ ]:
if __name__ == '__main__':
    start_time = timeit.default_timer()
    simulation1 = {}
    imgfile = 'Experiment1'
    for ma in optimized_period:
              print(ma)
              print(functions[ma])
              print ( int( optimized_period[ma]))
            # if ma == 'SMA':
              low_vol = df.apply(lambda c:  functions[ma](c, timeperiod = int( optimized_period[ma])))
              low_vol = low_vol.fillna(0)
              low_vol_data = df['close']
              high_vol = pd.DataFrame()
              df2 = df.copy()
              for i in df2.columns:
                if i in low_vol.columns:
                  high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
              high_vol_data = df['close']
              ## *****************************************************
              # Generate ARIMA and LSTM predictions
              print('\nWorking on ' + ma + ' predictions')
              try:
                print('parameters used : ', train_len, test_len)
                low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse,low_vol_mae = get_arima(low_vol,low_vol_data, train_len, test_len)
              except Exception:
                  print('ARIMA error, skipping to next MA type')
                  continue
              Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,high_vol_Original, high_vol_prediction, high_vol_mse, high_vol_rmse,high_vol_mae, = get_lstm(high_vol,high_vol_data, train_len, test_len,imgfile,ma)
              final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr) # ignoring first 3 steps 
              mse_ftr = mean_squared_error(df['close'].head(train_len).values,final_prediction_tr.values)
              rmse_ftr = mse_ftr ** 0.5
              mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
              mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)

              final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)
              mse = mean_squared_error(df['close'].tail(test_len).values,final_prediction.values)
              rmse = mse ** 0.5
              mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
              mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
              # Generate prediction accuracy
              actual = df['close'].tail(test_len).values
              result_1 = []
              result_2 = []
              for i in range(1, len(final_prediction)):
                  # Compare prediction to previous close price
                  if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
                      result_1.append(1)
                  elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
                      result_1.append(1)
                  else:
                      result_1.append(0)

                  # Compare prediction to previous prediction
                  if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
                      result_2.append(1)
                  elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
                      result_2.append(1)
                  else:
                      result_2.append(0)

              accuracy_1 = np.mean(result_1)
              accuracy_2 = np.mean(result_2)

              simulation1[ma] = {'low_vol': {'original':list(low_vol_Original), 'prediction': list(low_vol_prediction) , 'mse': low_vol_mse,
                                            'rmse': low_vol_rmse, 'mae' : low_vol_mae},
                                'high_vol': {'original':list(high_vol_Original),'prediction': list(high_vol_prediction), 'mse': high_vol_mse,
                                            'rmse': high_vol_rmse, 'mae' : high_vol_mae},
                                'final_tr': {'original':df['close'].head(train_len).tolist(),'prediction': final_prediction_tr.values.tolist(), 'mse': mse_ftr,
                                            'rmse': rmse_ftr, 'mae' : mae_ftr},
                                'final': {'original': df['close'].tail(test_len).tolist(), 'prediction': final_prediction.values.tolist(), 'mse': mse,
                                          'rmse': rmse, 'mae': mae },
                                'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}

              # save simulation data here as checkpoint
              with open('simulation1_data.json', 'w') as fp:
                  json.dump(simulation1, fp)

              # Report results so far; iterate with a separate name so the outer `ma` is not clobbered
              for key in simulation1.keys():
                  print('\n' + key)
                  print('Prediction vs Close:\t\t' + str(round(100*simulation1[key]['accuracy']['prediction vs close'], 2))
                        + '% Accuracy')
                  print('Prediction vs Prediction:\t' + str(round(100*simulation1[key]['accuracy']['prediction vs prediction'], 2))
                        + '% Accuracy')
                  print('MSE:\t', simulation1[key]['final']['mse'],
                        '\nRMSE:\t', simulation1[key]['final']['rmse'],
                        '\nMAE:\t', simulation1[key]['final']['mae'])
            # else:
            #   break
    elapsed = timeit.default_timer() - start_time
    print('Runtime (mins):', elapsed / 60)
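The directional-accuracy check inside the loop above (prediction vs. previous close) can be factored out as a standalone helper; this is a sketch, not part of the original notebook:

```python
import numpy as np

def directional_accuracy(prediction, actual):
    # Score 1 when the predicted move relative to the prior actual close
    # agrees in direction with the actual move, 0 otherwise
    hits = []
    for i in range(1, len(prediction)):
        up = prediction[i] > actual[i - 1] and actual[i] > actual[i - 1]
        down = prediction[i] < actual[i - 1] and actual[i] < actual[i - 1]
        hits.append(1 if up or down else 0)
    return np.mean(hits)
```

The `prediction vs prediction` variant is identical except that the predicted move is measured against the previous prediction instead of the previous close.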
SMA
SMA([input_arrays], [timeperiod=30])

Simple Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
17

Working on SMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.77 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4157.020, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3687.148, Time=0.05 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.26 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3458.651, Time=0.09 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3322.133, Time=0.14 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=0.93 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=1.06 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3324.133, Time=0.26 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.618 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1657.067
Date:                Sun, 12 Dec 2021   AIC                           3322.133
Time:                        18:26:16   BIC                           3340.897
Sample:                             0   HQIC                          3329.339
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1966      0.003   -387.226      0.000      -1.203      -1.191
ar.L2         -0.8952      0.006   -138.692      0.000      -0.908      -0.883
ar.L3         -0.3968      0.006    -68.284      0.000      -0.408      -0.385
sigma2         3.5858      0.017    214.535      0.000       3.553       3.619
===================================================================================
Ljung-Box (L1) (Q):                  14.47   Jarque-Bera (JB):           2428881.42
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       271.99
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

WARNING:tensorflow:Layer lstm will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
WARNING:tensorflow:Layer lstm will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.43870, saving model to LSTM1.h5
48/48 - 4s - loss: 0.1945 - val_loss: 0.4387 - lr: 0.0010 - 4s/epoch - 84ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.43870 to 0.18463, saving model to LSTM1.h5
48/48 - 1s - loss: 0.1534 - val_loss: 0.1846 - lr: 0.0010 - 591ms/epoch - 12ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.18463
48/48 - 1s - loss: 0.0812 - val_loss: 2.0345 - lr: 0.0010 - 635ms/epoch - 13ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.18463
48/48 - 1s - loss: 0.0473 - val_loss: 0.5210 - lr: 0.0010 - 565ms/epoch - 12ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.18463 to 0.15949, saving model to LSTM1.h5
48/48 - 1s - loss: 0.0559 - val_loss: 0.1595 - lr: 0.0010 - 612ms/epoch - 13ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.15949
48/48 - 1s - loss: 0.0491 - val_loss: 0.3984 - lr: 0.0010 - 602ms/epoch - 13ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.15949
48/48 - 1s - loss: 0.0394 - val_loss: 0.2050 - lr: 0.0010 - 585ms/epoch - 12ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.15949
48/48 - 1s - loss: 0.0376 - val_loss: 0.1698 - lr: 0.0010 - 583ms/epoch - 12ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.15949
48/48 - 1s - loss: 0.0406 - val_loss: 0.2129 - lr: 0.0010 - 576ms/epoch - 12ms/step
Epoch 10/500

Epoch 00010: val_loss improved from 0.15949 to 0.14004, saving model to LSTM1.h5
48/48 - 1s - loss: 0.0386 - val_loss: 0.1400 - lr: 0.0010 - 614ms/epoch - 13ms/step
Epoch 11/500

Epoch 00011: val_loss improved from 0.14004 to 0.12725, saving model to LSTM1.h5
48/48 - 1s - loss: 0.0345 - val_loss: 0.1273 - lr: 0.0010 - 600ms/epoch - 12ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.12725
48/48 - 1s - loss: 0.0316 - val_loss: 0.1812 - lr: 0.0010 - 574ms/epoch - 12ms/step
Epoch 13/500

Epoch 00013: val_loss improved from 0.12725 to 0.08557, saving model to LSTM1.h5
48/48 - 1s - loss: 0.0349 - val_loss: 0.0856 - lr: 0.0010 - 640ms/epoch - 13ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.08557
48/48 - 1s - loss: 0.0358 - val_loss: 0.1558 - lr: 0.0010 - 548ms/epoch - 11ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.08557
48/48 - 1s - loss: 0.0312 - val_loss: 0.1101 - lr: 0.0010 - 570ms/epoch - 12ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.08557
48/48 - 1s - loss: 0.0355 - val_loss: 0.3135 - lr: 0.0010 - 548ms/epoch - 11ms/step
Epoch 17/500

Epoch 00017: val_loss improved from 0.08557 to 0.05956, saving model to LSTM1.h5
48/48 - 1s - loss: 0.0336 - val_loss: 0.0596 - lr: 0.0010 - 584ms/epoch - 12ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.05956
48/48 - 1s - loss: 0.0349 - val_loss: 0.2178 - lr: 0.0010 - 617ms/epoch - 13ms/step
Epoch 19/500

Epoch 00019: val_loss improved from 0.05956 to 0.02770, saving model to LSTM1.h5
48/48 - 1s - loss: 0.0304 - val_loss: 0.0277 - lr: 0.0010 - 569ms/epoch - 12ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.02770
48/48 - 1s - loss: 0.0321 - val_loss: 0.0755 - lr: 0.0010 - 562ms/epoch - 12ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.02770
48/48 - 1s - loss: 0.0316 - val_loss: 0.0277 - lr: 0.0010 - 583ms/epoch - 12ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.02770
48/48 - 1s - loss: 0.0389 - val_loss: 0.0659 - lr: 0.0010 - 591ms/epoch - 12ms/step
Epoch 23/500

Epoch 00023: val_loss improved from 0.02770 to 0.01428, saving model to LSTM1.h5
48/48 - 1s - loss: 0.0408 - val_loss: 0.0143 - lr: 0.0010 - 612ms/epoch - 13ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.01428
48/48 - 1s - loss: 0.0433 - val_loss: 0.0496 - lr: 0.0010 - 573ms/epoch - 12ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.01428
48/48 - 1s - loss: 0.0435 - val_loss: 0.0146 - lr: 0.0010 - 580ms/epoch - 12ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.01428
48/48 - 1s - loss: 0.0401 - val_loss: 0.0492 - lr: 0.0010 - 591ms/epoch - 12ms/step
Epoch 27/500

Epoch 00027: val_loss improved from 0.01428 to 0.01293, saving model to LSTM1.h5
48/48 - 1s - loss: 0.0338 - val_loss: 0.0129 - lr: 0.0010 - 588ms/epoch - 12ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0262 - val_loss: 0.0773 - lr: 0.0010 - 583ms/epoch - 12ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0275 - val_loss: 0.0501 - lr: 0.0010 - 576ms/epoch - 12ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0232 - val_loss: 0.2249 - lr: 0.0010 - 593ms/epoch - 12ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0225 - val_loss: 0.0237 - lr: 0.0010 - 557ms/epoch - 12ms/step
Epoch 32/500

Epoch 00032: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00032: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0208 - val_loss: 0.1209 - lr: 0.0010 - 587ms/epoch - 12ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0185 - val_loss: 0.1019 - lr: 1.0000e-04 - 572ms/epoch - 12ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0181 - val_loss: 0.0896 - lr: 1.0000e-04 - 539ms/epoch - 11ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0174 - val_loss: 0.0775 - lr: 1.0000e-04 - 579ms/epoch - 12ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0164 - val_loss: 0.0688 - lr: 1.0000e-04 - 592ms/epoch - 12ms/step
Epoch 37/500

Epoch 00037: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00037: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0164 - val_loss: 0.0572 - lr: 1.0000e-04 - 582ms/epoch - 12ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0182 - val_loss: 0.0561 - lr: 1.0000e-05 - 567ms/epoch - 12ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0163 - val_loss: 0.0555 - lr: 1.0000e-05 - 570ms/epoch - 12ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0178 - val_loss: 0.0545 - lr: 1.0000e-05 - 603ms/epoch - 13ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0163 - val_loss: 0.0541 - lr: 1.0000e-05 - 567ms/epoch - 12ms/step
Epoch 42/500

Epoch 00042: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00042: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0166 - val_loss: 0.0540 - lr: 1.0000e-05 - 562ms/epoch - 12ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0143 - val_loss: 0.0540 - lr: 1.0000e-05 - 613ms/epoch - 13ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0176 - val_loss: 0.0529 - lr: 1.0000e-05 - 592ms/epoch - 12ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0173 - val_loss: 0.0521 - lr: 1.0000e-05 - 575ms/epoch - 12ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0159 - val_loss: 0.0522 - lr: 1.0000e-05 - 600ms/epoch - 12ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0158 - val_loss: 0.0517 - lr: 1.0000e-05 - 613ms/epoch - 13ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0157 - val_loss: 0.0511 - lr: 1.0000e-05 - 602ms/epoch - 13ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0158 - val_loss: 0.0507 - lr: 1.0000e-05 - 586ms/epoch - 12ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0155 - val_loss: 0.0506 - lr: 1.0000e-05 - 550ms/epoch - 11ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0198 - val_loss: 0.0503 - lr: 1.0000e-05 - 563ms/epoch - 12ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0163 - val_loss: 0.0498 - lr: 1.0000e-05 - 564ms/epoch - 12ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0162 - val_loss: 0.0502 - lr: 1.0000e-05 - 612ms/epoch - 13ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0161 - val_loss: 0.0494 - lr: 1.0000e-05 - 575ms/epoch - 12ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0169 - val_loss: 0.0494 - lr: 1.0000e-05 - 601ms/epoch - 13ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0169 - val_loss: 0.0490 - lr: 1.0000e-05 - 547ms/epoch - 11ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0163 - val_loss: 0.0484 - lr: 1.0000e-05 - 577ms/epoch - 12ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0163 - val_loss: 0.0484 - lr: 1.0000e-05 - 632ms/epoch - 13ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0150 - val_loss: 0.0474 - lr: 1.0000e-05 - 561ms/epoch - 12ms/step
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0150 - val_loss: 0.0479 - lr: 1.0000e-05 - 588ms/epoch - 12ms/step
Epoch 61/500

Epoch 00061: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0189 - val_loss: 0.0477 - lr: 1.0000e-05 - 593ms/epoch - 12ms/step
Epoch 62/500

Epoch 00062: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0167 - val_loss: 0.0473 - lr: 1.0000e-05 - 583ms/epoch - 12ms/step
Epoch 63/500

Epoch 00063: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0160 - val_loss: 0.0478 - lr: 1.0000e-05 - 573ms/epoch - 12ms/step
Epoch 64/500

Epoch 00064: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0155 - val_loss: 0.0472 - lr: 1.0000e-05 - 568ms/epoch - 12ms/step
Epoch 65/500

Epoch 00065: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0165 - val_loss: 0.0464 - lr: 1.0000e-05 - 599ms/epoch - 12ms/step
Epoch 66/500

Epoch 00066: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0149 - val_loss: 0.0467 - lr: 1.0000e-05 - 604ms/epoch - 13ms/step
Epoch 67/500

Epoch 00067: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0159 - val_loss: 0.0468 - lr: 1.0000e-05 - 567ms/epoch - 12ms/step
Epoch 68/500

Epoch 00068: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0160 - val_loss: 0.0467 - lr: 1.0000e-05 - 602ms/epoch - 13ms/step
Epoch 69/500

Epoch 00069: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0166 - val_loss: 0.0459 - lr: 1.0000e-05 - 601ms/epoch - 13ms/step
Epoch 70/500

Epoch 00070: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0175 - val_loss: 0.0463 - lr: 1.0000e-05 - 588ms/epoch - 12ms/step
Epoch 71/500

Epoch 00071: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0170 - val_loss: 0.0446 - lr: 1.0000e-05 - 558ms/epoch - 12ms/step
Epoch 72/500

Epoch 00072: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0163 - val_loss: 0.0450 - lr: 1.0000e-05 - 555ms/epoch - 12ms/step
Epoch 73/500

Epoch 00073: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0160 - val_loss: 0.0437 - lr: 1.0000e-05 - 608ms/epoch - 13ms/step
Epoch 74/500

Epoch 00074: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0177 - val_loss: 0.0449 - lr: 1.0000e-05 - 616ms/epoch - 13ms/step
Epoch 75/500

Epoch 00075: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0161 - val_loss: 0.0453 - lr: 1.0000e-05 - 614ms/epoch - 13ms/step
Epoch 76/500

Epoch 00076: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0155 - val_loss: 0.0441 - lr: 1.0000e-05 - 593ms/epoch - 12ms/step
Epoch 77/500

Epoch 00077: val_loss did not improve from 0.01293
48/48 - 1s - loss: 0.0151 - val_loss: 0.0439 - lr: 1.0000e-05 - 583ms/epoch - 12ms/step
Epoch 00077: early stopping
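The log above shows the learning-rate schedule in action: the rate steps from 1e-3 to 1e-4 to 1e-5 as `val_loss` plateaus, and training halts early at epoch 77. The plateau logic can be replayed in pure Python; note that the `factor`, `patience`, and `min_lr` values below are assumptions, since the actual callback configuration is not visible in this output.

```python
def reduce_on_plateau(val_losses, lr=1e-3, factor=0.1, patience=4, min_lr=1e-5):
    """Replay a val_loss history and return the learning rate used each epoch,
    mimicking Keras's ReduceLROnPlateau: after `patience` epochs without a new
    best val_loss, multiply lr by `factor`, flooring it at `min_lr`. The reduced
    rate takes effect from the following epoch, as in the log above."""
    best = float("inf")
    wait = 0
    lrs = []
    for loss in val_losses:
        lrs.append(lr)          # rate in effect for this epoch
        if loss < best:
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                lr = max(lr * factor, min_lr)
                wait = 0
    return lrs
```

Because the floor is 1e-5, later "reducing learning rate to 1e-05" messages in the log leave the rate unchanged.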
SMA
Prediction vs Close:		50.0% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 36.8316918621851 
RMSE:	 6.068911917484476 
MAPE:	 5.0566072358251795
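The error metrics printed above can presumably be reproduced along these lines. This is a sketch: the exact definitions behind the two accuracy figures are not shown in the output, so `directional_accuracy` below is an assumed directional hit-rate, not necessarily the notebook's formula.

```python
import math

def regression_metrics(actual, predicted):
    """MSE, RMSE and MAPE (in percent) for two equal-length series."""
    n = len(actual)
    mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n
    rmse = math.sqrt(mse)
    mape = 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / n
    return mse, rmse, mape

def directional_accuracy(actual, predicted):
    """Percentage of steps where the predicted move direction matches the
    actual move direction (assumed interpretation of the accuracy figures)."""
    hits = sum(
        ((p1 - p0) > 0) == ((a1 - a0) > 0)
        for a0, a1, p0, p1 in zip(actual, actual[1:], predicted, predicted[1:])
    )
    return 100.0 * hits / (len(actual) - 1)
```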
EMA
EMA([input_arrays], [timeperiod=30])

Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
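For reference, the EMA described in the help text above can be approximated in pure Python. This sketch assumes the usual TA-Lib convention: seed with the SMA of the first `timeperiod` values, then apply the smoothing factor k = 2 / (timeperiod + 1).

```python
def ema(prices, timeperiod=30):
    """Exponential moving average, seeded with the SMA of the first
    `timeperiod` values (assumed TA-Lib-style seeding). Positions inside
    the warm-up window are returned as None."""
    k = 2.0 / (timeperiod + 1)
    out = [None] * len(prices)
    if len(prices) < timeperiod:
        return out
    prev = sum(prices[:timeperiod]) / timeperiod   # SMA seed
    out[timeperiod - 1] = prev
    for i in range(timeperiod, len(prices)):
        prev = prev + k * (prices[i] - prev)       # recursive smoothing
        out[i] = prev
    return out
```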
51

Working on EMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.54 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4231.556, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3761.238, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.40 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3532.227, Time=0.10 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3394.496, Time=0.12 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.16 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.86 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3396.496, Time=0.29 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.587 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1693.248
Date:                Sun, 12 Dec 2021   AIC                           3394.496
Time:                        18:29:10   BIC                           3413.260
Sample:                             0   HQIC                          3401.702
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1982      0.003   -389.569      0.000      -1.204      -1.192
ar.L2         -0.8976      0.006   -139.811      0.000      -0.910      -0.885
ar.L3         -0.3984      0.006    -68.662      0.000      -0.410      -0.387
sigma2         3.9230      0.018    215.372      0.000       3.887       3.959
===================================================================================
Ljung-Box (L1) (Q):                  14.54   Jarque-Bera (JB):           2462173.05
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.82
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 
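As a sanity check, the AIC reported in the SARIMAX table follows directly from its log-likelihood: AIC = 2k - 2 ln L with k = 4 estimated parameters (three AR coefficients plus sigma2). The effective sample size used for BIC below is an assumption (808 observations less d = 3 differences).

```python
import math

# Values copied from the SARIMAX results table above.
loglik = -1693.248
k = 4          # ar.L1, ar.L2, ar.L3, sigma2
n_eff = 805    # assumed: 808 observations minus d = 3 differences

aic = 2 * k - 2 * loglik               # table reports 3394.496
bic = k * math.log(n_eff) - 2 * loglik # table reports 3413.260 (within rounding)
print(round(aic, 3), round(bic, 3))
```

The stepwise search minimises exactly this AIC, which is why ARIMA(3,3,0) is selected over the (1,3,0) and (2,3,0) candidates listed above.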

WARNING:tensorflow:Layer lstm_1 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.15300, saving model to LSTM1.h5
16/16 - 2s - loss: 0.5670 - val_loss: 0.1530 - lr: 0.0010 - 2s/epoch - 130ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.15300
16/16 - 0s - loss: 0.1440 - val_loss: 0.8880 - lr: 0.0010 - 236ms/epoch - 15ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.15300 to 0.06431, saving model to LSTM1.h5
16/16 - 0s - loss: 0.0660 - val_loss: 0.0643 - lr: 0.0010 - 247ms/epoch - 15ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.06431
16/16 - 0s - loss: 0.0494 - val_loss: 0.1855 - lr: 0.0010 - 219ms/epoch - 14ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.06431
16/16 - 0s - loss: 0.0438 - val_loss: 0.1460 - lr: 0.0010 - 219ms/epoch - 14ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.06431
16/16 - 0s - loss: 0.0443 - val_loss: 0.0735 - lr: 0.0010 - 192ms/epoch - 12ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.06431
16/16 - 0s - loss: 0.0427 - val_loss: 0.0684 - lr: 0.0010 - 226ms/epoch - 14ms/step
Epoch 8/500

Epoch 00008: val_loss improved from 0.06431 to 0.04310, saving model to LSTM1.h5
16/16 - 0s - loss: 0.0372 - val_loss: 0.0431 - lr: 0.0010 - 259ms/epoch - 16ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.04310
16/16 - 0s - loss: 0.0350 - val_loss: 0.0449 - lr: 0.0010 - 204ms/epoch - 13ms/step
Epoch 10/500

Epoch 00010: val_loss improved from 0.04310 to 0.03645, saving model to LSTM1.h5
16/16 - 0s - loss: 0.0329 - val_loss: 0.0364 - lr: 0.0010 - 237ms/epoch - 15ms/step
Epoch 11/500

Epoch 00011: val_loss improved from 0.03645 to 0.01956, saving model to LSTM1.h5
16/16 - 0s - loss: 0.0345 - val_loss: 0.0196 - lr: 0.0010 - 278ms/epoch - 17ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.01956
16/16 - 0s - loss: 0.0361 - val_loss: 0.0223 - lr: 0.0010 - 240ms/epoch - 15ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.01956
16/16 - 0s - loss: 0.0340 - val_loss: 0.0550 - lr: 0.0010 - 202ms/epoch - 13ms/step
Epoch 14/500

Epoch 00014: val_loss improved from 0.01956 to 0.01270, saving model to LSTM1.h5
16/16 - 0s - loss: 0.0336 - val_loss: 0.0127 - lr: 0.0010 - 242ms/epoch - 15ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0352 - val_loss: 0.0577 - lr: 0.0010 - 232ms/epoch - 15ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0303 - val_loss: 0.0152 - lr: 0.0010 - 231ms/epoch - 14ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0379 - val_loss: 0.0353 - lr: 0.0010 - 237ms/epoch - 15ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0287 - val_loss: 0.0162 - lr: 0.0010 - 231ms/epoch - 14ms/step
Epoch 19/500

Epoch 00019: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00019: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0298 - val_loss: 0.0153 - lr: 0.0010 - 231ms/epoch - 14ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0278 - val_loss: 0.0160 - lr: 1.0000e-04 - 207ms/epoch - 13ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0272 - val_loss: 0.0158 - lr: 1.0000e-04 - 225ms/epoch - 14ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0314 - val_loss: 0.0156 - lr: 1.0000e-04 - 189ms/epoch - 12ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0297 - val_loss: 0.0149 - lr: 1.0000e-04 - 247ms/epoch - 15ms/step
Epoch 24/500

Epoch 00024: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00024: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0311 - val_loss: 0.0154 - lr: 1.0000e-04 - 207ms/epoch - 13ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0291 - val_loss: 0.0156 - lr: 1.0000e-05 - 220ms/epoch - 14ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0278 - val_loss: 0.0159 - lr: 1.0000e-05 - 227ms/epoch - 14ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0273 - val_loss: 0.0160 - lr: 1.0000e-05 - 220ms/epoch - 14ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0293 - val_loss: 0.0161 - lr: 1.0000e-05 - 209ms/epoch - 13ms/step
Epoch 29/500

Epoch 00029: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00029: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0285 - val_loss: 0.0161 - lr: 1.0000e-05 - 258ms/epoch - 16ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0284 - val_loss: 0.0161 - lr: 1.0000e-05 - 222ms/epoch - 14ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0286 - val_loss: 0.0161 - lr: 1.0000e-05 - 203ms/epoch - 13ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0296 - val_loss: 0.0162 - lr: 1.0000e-05 - 241ms/epoch - 15ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0276 - val_loss: 0.0162 - lr: 1.0000e-05 - 227ms/epoch - 14ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0279 - val_loss: 0.0162 - lr: 1.0000e-05 - 217ms/epoch - 14ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0306 - val_loss: 0.0162 - lr: 1.0000e-05 - 225ms/epoch - 14ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0299 - val_loss: 0.0162 - lr: 1.0000e-05 - 214ms/epoch - 13ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0293 - val_loss: 0.0162 - lr: 1.0000e-05 - 220ms/epoch - 14ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0266 - val_loss: 0.0161 - lr: 1.0000e-05 - 210ms/epoch - 13ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0308 - val_loss: 0.0162 - lr: 1.0000e-05 - 235ms/epoch - 15ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0282 - val_loss: 0.0162 - lr: 1.0000e-05 - 216ms/epoch - 14ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0296 - val_loss: 0.0163 - lr: 1.0000e-05 - 219ms/epoch - 14ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0282 - val_loss: 0.0163 - lr: 1.0000e-05 - 227ms/epoch - 14ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0266 - val_loss: 0.0163 - lr: 1.0000e-05 - 212ms/epoch - 13ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0287 - val_loss: 0.0163 - lr: 1.0000e-05 - 231ms/epoch - 14ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0316 - val_loss: 0.0163 - lr: 1.0000e-05 - 206ms/epoch - 13ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0305 - val_loss: 0.0164 - lr: 1.0000e-05 - 210ms/epoch - 13ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0288 - val_loss: 0.0165 - lr: 1.0000e-05 - 220ms/epoch - 14ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0285 - val_loss: 0.0166 - lr: 1.0000e-05 - 228ms/epoch - 14ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0298 - val_loss: 0.0167 - lr: 1.0000e-05 - 222ms/epoch - 14ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0290 - val_loss: 0.0166 - lr: 1.0000e-05 - 215ms/epoch - 13ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0293 - val_loss: 0.0164 - lr: 1.0000e-05 - 226ms/epoch - 14ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0273 - val_loss: 0.0163 - lr: 1.0000e-05 - 212ms/epoch - 13ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0290 - val_loss: 0.0162 - lr: 1.0000e-05 - 235ms/epoch - 15ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0287 - val_loss: 0.0162 - lr: 1.0000e-05 - 213ms/epoch - 13ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0277 - val_loss: 0.0164 - lr: 1.0000e-05 - 226ms/epoch - 14ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0285 - val_loss: 0.0166 - lr: 1.0000e-05 - 202ms/epoch - 13ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0291 - val_loss: 0.0167 - lr: 1.0000e-05 - 220ms/epoch - 14ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0281 - val_loss: 0.0168 - lr: 1.0000e-05 - 214ms/epoch - 13ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0277 - val_loss: 0.0168 - lr: 1.0000e-05 - 220ms/epoch - 14ms/step
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0270 - val_loss: 0.0169 - lr: 1.0000e-05 - 218ms/epoch - 14ms/step
Epoch 61/500

Epoch 00061: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0286 - val_loss: 0.0169 - lr: 1.0000e-05 - 202ms/epoch - 13ms/step
Epoch 62/500

Epoch 00062: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0303 - val_loss: 0.0170 - lr: 1.0000e-05 - 260ms/epoch - 16ms/step
Epoch 63/500

Epoch 00063: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0296 - val_loss: 0.0169 - lr: 1.0000e-05 - 219ms/epoch - 14ms/step
Epoch 64/500

Epoch 00064: val_loss did not improve from 0.01270
16/16 - 0s - loss: 0.0288 - val_loss: 0.0168 - lr: 1.0000e-05 - 255ms/epoch - 16ms/step
Epoch 00064: early stopping

EMA
Prediction vs Close:		55.6% Accuracy
Prediction vs Prediction:	39.93% Accuracy
MSE:	 71.66249742365495 
RMSE:	 8.465370483543822 
MAPE:	 6.945165499596413
WMA
WMA([input_arrays], [timeperiod=30])

Weighted Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
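The WMA described in the help text above is a linearly weighted average over the window, with the newest price weighted most heavily. A pure-Python sketch of that definition:

```python
def wma(prices, timeperiod=30):
    """Weighted moving average: within each window the most recent price gets
    weight `timeperiod` and the oldest gets weight 1. Positions inside the
    warm-up window are returned as None."""
    denom = timeperiod * (timeperiod + 1) / 2.0    # sum of weights 1..timeperiod
    out = [None] * len(prices)
    for i in range(timeperiod - 1, len(prices)):
        window = prices[i - timeperiod + 1 : i + 1]
        out[i] = sum(w * p for w, p in zip(range(1, timeperiod + 1), window)) / denom
    return out
```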
49

Working on WMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.56 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4264.089, Time=0.05 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3793.930, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.33 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3564.923, Time=0.09 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3427.258, Time=0.11 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.68 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.62 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3429.258, Time=0.25 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.760 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1709.629
Date:                Sun, 12 Dec 2021   AIC                           3427.258
Time:                        18:30:56   BIC                           3446.021
Sample:                             0   HQIC                          3434.464
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1981      0.003   -389.386      0.000      -1.204      -1.192
ar.L2         -0.8974      0.006   -139.699      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.737      0.000      -0.410      -0.387
sigma2         4.0860      0.019    215.311      0.000       4.049       4.123
===================================================================================
Ljung-Box (L1) (Q):                  14.57   Jarque-Bera (JB):           2460901.70
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.75
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

WARNING:tensorflow:Layer lstm_2 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.03032, saving model to LSTM1.h5
17/17 - 2s - loss: 0.1845 - val_loss: 0.0303 - lr: 0.0010 - 2s/epoch - 122ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.03032
17/17 - 0s - loss: 0.1137 - val_loss: 0.0971 - lr: 0.0010 - 242ms/epoch - 14ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.03032
17/17 - 0s - loss: 0.0670 - val_loss: 0.2708 - lr: 0.0010 - 258ms/epoch - 15ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.03032 to 0.00696, saving model to LSTM1.h5
17/17 - 0s - loss: 0.0519 - val_loss: 0.0070 - lr: 0.0010 - 276ms/epoch - 16ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.00696
17/17 - 0s - loss: 0.0421 - val_loss: 0.0272 - lr: 0.0010 - 237ms/epoch - 14ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.00696
17/17 - 0s - loss: 0.0333 - val_loss: 0.0647 - lr: 0.0010 - 229ms/epoch - 13ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.00696
17/17 - 0s - loss: 0.0333 - val_loss: 0.0161 - lr: 0.0010 - 240ms/epoch - 14ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.00696
17/17 - 0s - loss: 0.0290 - val_loss: 0.0175 - lr: 0.0010 - 216ms/epoch - 13ms/step
Epoch 9/500

Epoch 00009: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00009: val_loss did not improve from 0.00696
17/17 - 0s - loss: 0.0284 - val_loss: 0.0169 - lr: 0.0010 - 235ms/epoch - 14ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.00696
17/17 - 0s - loss: 0.0265 - val_loss: 0.0179 - lr: 1.0000e-04 - 224ms/epoch - 13ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.00696
17/17 - 0s - loss: 0.0229 - val_loss: 0.0175 - lr: 1.0000e-04 - 245ms/epoch - 14ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.00696
17/17 - 0s - loss: 0.0252 - val_loss: 0.0175 - lr: 1.0000e-04 - 218ms/epoch - 13ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.00696
17/17 - 0s - loss: 0.0252 - val_loss: 0.0193 - lr: 1.0000e-04 - 257ms/epoch - 15ms/step
Epoch 14/500

Epoch 00014: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00014: val_loss did not improve from 0.00696
17/17 - 0s - loss: 0.0245 - val_loss: 0.0195 - lr: 1.0000e-04 - 213ms/epoch - 13ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.00696
17/17 - 0s - loss: 0.0233 - val_loss: 0.0195 - lr: 1.0000e-05 - 236ms/epoch - 14ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.00696
17/17 - 0s - loss: 0.0221 - val_loss: 0.0196 - lr: 1.0000e-05 - 252ms/epoch - 15ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.00696
17/17 - 0s - loss: 0.0232 - val_loss: 0.0197 - lr: 1.0000e-05 - 225ms/epoch - 13ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.00696
17/17 - 0s - loss: 0.0238 - val_loss: 0.0195 - lr: 1.0000e-05 - 244ms/epoch - 14ms/step
[... epochs 19-54 elided: val_loss held between 0.0188 and 0.0209 and never improved on the best of 0.00696; ReduceLROnPlateau kept lr at 1e-05 ...]
Epoch 00054: early stopping
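The checkpoint, plateau, and early-stopping messages in the log above come from Keras callbacks (`ModelCheckpoint`, `ReduceLROnPlateau`, `EarlyStopping`). As a minimal pure-Python sketch of the plateau logic they implement (the patience and factor values here are illustrative assumptions, not the notebook's actual settings):

```python
class PlateauMonitor:
    """Sketch of ReduceLROnPlateau + EarlyStopping tracking val_loss.

    Patience/factor values are illustrative; the notebook's actual
    callback configuration is not shown in the log.
    """

    def __init__(self, lr=1e-3, factor=0.1, lr_patience=5,
                 stop_patience=50, min_lr=1e-5):
        self.lr, self.factor, self.min_lr = lr, factor, min_lr
        self.lr_patience, self.stop_patience = lr_patience, stop_patience
        self.best = float("inf")          # best val_loss seen so far
        self.lr_wait = self.stop_wait = 0  # epochs since last improvement

    def update(self, val_loss):
        """Process one epoch's val_loss; return True when training should stop."""
        if val_loss < self.best:
            self.best = val_loss
            self.lr_wait = self.stop_wait = 0
            return False
        self.lr_wait += 1
        self.stop_wait += 1
        if self.lr_wait >= self.lr_patience:
            # "ReduceLROnPlateau reducing learning rate to ..."
            self.lr = max(self.lr * self.factor, self.min_lr)
            self.lr_wait = 0
        # "Epoch NNNNN: early stopping"
        return self.stop_wait >= self.stop_patience
```

This reproduces the shape of the log: the learning rate is cut in steps down to `min_lr` while `val_loss` plateaus, and training halts once the stopping patience is exhausted.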
SMA
Prediction vs Close:		50.0% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 36.8316918621851 
RMSE:	 6.068911917484476 
MAPE:	 5.0566072358251795

EMA
Prediction vs Close:		55.6% Accuracy
Prediction vs Prediction:	39.93% Accuracy
MSE:	 71.66249742365495 
RMSE:	 8.465370483543822 
MAPE:	 6.945165499596413

WMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	44.4% Accuracy
MSE:	 54.1687035256009 
RMSE:	 7.359939097954609 
MAPE:	 6.023882427496638
DEMA
DEMA([input_arrays], [timeperiod=30])

Double Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
89
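The help text above is TA-Lib's docstring for `DEMA` (in the notebook it is called as `talib.DEMA(price, timeperiod)`). The formula it implements can be sketched in plain pandas; note that TA-Lib's output differs slightly near the start of the series because it seeds its EMAs differently:

```python
import pandas as pd

def dema(price, timeperiod=30):
    """Double Exponential Moving Average: DEMA = 2*EMA(price) - EMA(EMA(price)).

    A pure-pandas sketch of the formula TA-Lib's DEMA computes; this is an
    illustration, not TA-Lib's exact implementation.
    """
    s = pd.Series(price, dtype=float)
    ema1 = s.ewm(span=timeperiod, adjust=False).mean()
    ema2 = ema1.ewm(span=timeperiod, adjust=False).mean()
    # Doubling the first EMA and subtracting the EMA-of-EMA reduces the lag
    # of a plain EMA, which is the point of the indicator.
    return 2 * ema1 - ema2
```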

Working on DEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.56 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4436.126, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3965.317, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.52 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3736.589, Time=0.08 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3598.951, Time=0.10 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.20 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=1.22 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3600.951, Time=0.25 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 4.047 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1795.475
Date:                Sun, 12 Dec 2021   AIC                           3598.951
Time:                        18:32:46   BIC                           3617.714
Sample:                             0   HQIC                          3606.157
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1983      0.003   -389.581      0.000      -1.204      -1.192
ar.L2         -0.8973      0.006   -139.732      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.649      0.000      -0.410      -0.387
sigma2         5.0573      0.023    215.292      0.000       5.011       5.103
===================================================================================
Ljung-Box (L1) (Q):                  14.41   Jarque-Bera (JB):           2460553.80
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.89
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.74
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 
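The stepwise search above (produced by pmdarima's `auto_arima` with `trace=True`) simply keeps the candidate order with the lowest AIC; fits that failed report `AIC=inf` and drop out. Using the AIC values from the DEMA log:

```python
import math

# (p, d, q) -> AIC, transcribed from the stepwise search log above.
candidates = {
    (1, 3, 1): math.inf,
    (0, 3, 0): 4436.126,
    (1, 3, 0): 3965.317,
    (0, 3, 1): math.inf,
    (2, 3, 0): 3736.589,
    (3, 3, 0): 3598.951,
    (3, 3, 1): math.inf,
    (2, 3, 1): math.inf,
}

# auto_arima's selection step: minimize AIC over the candidates it tried.
best_order = min(candidates, key=candidates.get)
# best_order == (3, 3, 0), matching "Best model:  ARIMA(3,3,0)(0,0,0)[0]"
```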

WARNING:tensorflow:Layer lstm_3 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
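The cuDNN warning above means the LSTM layer's configuration disqualifies it from TensorFlow's fast cuDNN kernel. To my understanding (an assumption based on the Keras docs, not shown in this notebook), the fast path requires the layer to keep several defaults; a small checker sketch:

```python
# Hypothetical checker: the Keras cuDNN LSTM fast path requires (among
# other conditions) that these layer arguments keep their default values.
CUDNN_REQUIREMENTS = {
    "activation": "tanh",
    "recurrent_activation": "sigmoid",
    "recurrent_dropout": 0.0,
    "unroll": False,
    "use_bias": True,
}

def uses_cudnn(layer_config: dict) -> bool:
    """Return True if every cuDNN fast-path requirement is satisfied.

    Missing keys count as defaults, i.e. as satisfying the requirement.
    """
    return all(layer_config.get(k, v) == v for k, v in CUDNN_REQUIREMENTS.items())
```

Here the warning suggests one of these settings (likely a non-default activation or `recurrent_dropout`) was changed, so TensorFlow falls back to the slower generic GPU kernel.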
Epoch 00001: val_loss improved from inf to 0.13774, saving model to LSTM1.h5
10/10 - 2s - loss: 1.0801 - val_loss: 0.1377 - lr: 0.0010 - 2s/epoch - 201ms/step
[... epochs 2-7 elided: val_loss improved stepwise to its best at epoch 7 ...]
Epoch 00007: val_loss improved from 0.01265 to 0.01229, saving model to LSTM1.h5
10/10 - 0s - loss: 0.0573 - val_loss: 0.0123 - lr: 0.0010 - 182ms/epoch - 18ms/step
[... epochs 8-57 elided: no further improvement on 0.01229; ReduceLROnPlateau cut lr to 1e-04 at epoch 12 and to 1e-05 at epoch 17 while val_loss drifted up to ~0.040 ...]
Epoch 00057: early stopping

DEMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 122.04043856386052 
RMSE:	 11.047191433294731 
MAPE:	 9.094033094452044
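The per-indicator summaries above report MSE, RMSE, and MAPE against the close price, plus two directional-accuracy figures. A minimal sketch of these metrics (my reading of the "Prediction vs Close" figure is the share of steps where the predicted move direction matches the actual move; the notebook's exact definitions of the two accuracy lines are not shown):

```python
import numpy as np

def evaluate(actual, predicted):
    """Compute the error metrics printed in the summaries above.

    The directional-accuracy definition here is an assumption: fraction of
    consecutive steps where sign(actual move) == sign(predicted move).
    """
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    err = actual - predicted
    mse = float(np.mean(err ** 2))
    rmse = float(np.sqrt(mse))
    mape = float(np.mean(np.abs(err / actual)) * 100)       # percent
    direction = float(
        np.mean(np.sign(np.diff(actual)) == np.sign(np.diff(predicted))) * 100
    )
    return {"MSE": mse, "RMSE": rmse, "MAPE": mape, "Direction%": round(direction, 2)}
```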
KAMA
KAMA([input_arrays], [timeperiod=30])

Kaufman Adaptive Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
18

Working on KAMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.48 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4190.464, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3724.371, Time=0.07 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.38 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3494.154, Time=0.09 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3357.435, Time=0.12 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.56 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.99 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3359.435, Time=0.27 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.999 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1674.717
Date:                Sun, 12 Dec 2021   AIC                           3357.435
Time:                        18:34:19   BIC                           3376.198
Sample:                             0   HQIC                          3364.641
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1955      0.003   -381.246      0.000      -1.202      -1.189
ar.L2         -0.8964      0.007   -135.835      0.000      -0.909      -0.883
ar.L3         -0.3971      0.006    -67.229      0.000      -0.409      -0.385
sigma2         3.7466      0.018    211.623      0.000       3.712       3.781
===================================================================================
Ljung-Box (L1) (Q):                  14.20   Jarque-Bera (JB):           2338363.32
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.01   Skew:                             3.76
Prob(H) (two-sided):                  0.00   Kurtosis:                       266.93
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

WARNING:tensorflow:Layer lstm_4 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 00001: val_loss improved from inf to 0.19193, saving model to LSTM1.h5
45/45 - 2s - loss: 0.2960 - val_loss: 0.1919 - lr: 0.0010 - 2s/epoch - 53ms/step
[... epochs 2-13 elided: val_loss improved intermittently to its best at epoch 13 ...]
Epoch 00013: val_loss improved from 0.04565 to 0.00916, saving model to LSTM1.h5
45/45 - 1s - loss: 0.0290 - val_loss: 0.0092 - lr: 0.0010 - 577ms/epoch - 13ms/step
[... epochs 14-63 elided: no further improvement on 0.00916; ReduceLROnPlateau cut lr to 1e-04 at epoch 18 and to 1e-05 at epoch 23 while val_loss settled near 0.020 ...]
Epoch 00063: early stopping

KAMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	48.13% Accuracy
MSE:	 45.6842728316542 
RMSE:	 6.759014190816157 
MAPE:	 5.403969846039035
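For context, the combination step that makes these per-indicator runs a hybrid model can be sketched as follows. In the usual ARIMA-LSTM hybrid, ARIMA captures the linear component of the series and the LSTM is trained on ARIMA's residuals, so the final forecast is the sum of the two (this is the standard hybrid structure; the variable names below are illustrative, not taken from the notebook's code):

```python
import numpy as np

def hybrid_forecast(arima_pred, lstm_residual_pred):
    """Combine the ARIMA forecast of the series with the LSTM forecast
    of the ARIMA residuals: y_hat = linear part + nonlinear residual part.
    """
    return np.asarray(arima_pred, dtype=float) + np.asarray(lstm_residual_pred, dtype=float)
```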
MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])

MidPoint over period (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 14
Outputs:
    real
14

Working on MIDPOINT predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.48 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4212.289, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3747.746, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.32 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3523.401, Time=0.09 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3387.759, Time=0.15 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.58 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=1.11 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3389.758, Time=0.27 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 4.097 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1689.879
Date:                Sun, 12 Dec 2021   AIC                           3387.759
Time:                        18:36:24   BIC                           3406.522
Sample:                             0   HQIC                          3394.964
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1878      0.003   -345.315      0.000      -1.195      -1.181
ar.L2         -0.8876      0.007   -121.809      0.000      -0.902      -0.873
ar.L3         -0.3957      0.007    -60.127      0.000      -0.409      -0.383
sigma2         3.8904      0.020    193.404      0.000       3.851       3.930
===================================================================================
Ljung-Box (L1) (Q):                  13.21   Jarque-Bera (JB):           1659080.01
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.08   Skew:                             3.28
Prob(H) (two-sided):                  0.00   Kurtosis:                       225.31
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

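The stepwise search above selects the order with the lowest AIC. The information criteria in the results table can be reproduced by hand from the reported log likelihood; this is a sketch, and the effective sample size after d=3 differencing is assumed to be 808 − 3 = 805:

```python
import math

# Values from the SARIMAX results table for ARIMA(3,3,0):
log_likelihood = -1689.879
k = 4            # estimated parameters: ar.L1, ar.L2, ar.L3, sigma2
n_eff = 808 - 3  # observations remaining after d=3 differencing (assumed)

aic = 2 * k - 2 * log_likelihood
bic = k * math.log(n_eff) - 2 * log_likelihood
hqic = 2 * k * math.log(math.log(n_eff)) - 2 * log_likelihood
# These match the table's AIC 3387.759, BIC 3406.522, HQIC 3394.964
# up to the rounding of the reported log likelihood.
```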
WARNING:tensorflow:Layer lstm_5 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.06465, saving model to LSTM1.h5
58/58 - 3s - loss: 0.4628 - val_loss: 0.0646 - lr: 0.0010 - 3s/epoch - 45ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.06465
58/58 - 1s - loss: 0.1285 - val_loss: 0.0952 - lr: 0.0010 - 700ms/epoch - 12ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.06465
58/58 - 1s - loss: 0.0685 - val_loss: 0.8589 - lr: 0.0010 - 690ms/epoch - 12ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.06465 to 0.05724, saving model to LSTM1.h5
58/58 - 1s - loss: 0.0662 - val_loss: 0.0572 - lr: 0.0010 - 748ms/epoch - 13ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.05724 to 0.00650, saving model to LSTM1.h5
58/58 - 1s - loss: 0.0514 - val_loss: 0.0065 - lr: 0.0010 - 750ms/epoch - 13ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0382 - val_loss: 0.3004 - lr: 0.0010 - 686ms/epoch - 12ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0343 - val_loss: 0.1529 - lr: 0.0010 - 680ms/epoch - 12ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0360 - val_loss: 0.0509 - lr: 0.0010 - 704ms/epoch - 12ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0343 - val_loss: 0.0268 - lr: 0.0010 - 680ms/epoch - 12ms/step
Epoch 10/500

Epoch 00010: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00010: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0345 - val_loss: 0.0105 - lr: 0.0010 - 688ms/epoch - 12ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0348 - val_loss: 0.0123 - lr: 1.0000e-04 - 704ms/epoch - 12ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0287 - val_loss: 0.0143 - lr: 1.0000e-04 - 744ms/epoch - 13ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0278 - val_loss: 0.0157 - lr: 1.0000e-04 - 684ms/epoch - 12ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0272 - val_loss: 0.0153 - lr: 1.0000e-04 - 703ms/epoch - 12ms/step
Epoch 15/500

Epoch 00015: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00015: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0270 - val_loss: 0.0161 - lr: 1.0000e-04 - 694ms/epoch - 12ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0275 - val_loss: 0.0161 - lr: 1.0000e-05 - 738ms/epoch - 13ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0268 - val_loss: 0.0159 - lr: 1.0000e-05 - 690ms/epoch - 12ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0255 - val_loss: 0.0161 - lr: 1.0000e-05 - 710ms/epoch - 12ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0281 - val_loss: 0.0160 - lr: 1.0000e-05 - 723ms/epoch - 12ms/step
Epoch 20/500

Epoch 00020: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00020: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0264 - val_loss: 0.0161 - lr: 1.0000e-05 - 723ms/epoch - 12ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0256 - val_loss: 0.0163 - lr: 1.0000e-05 - 705ms/epoch - 12ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0288 - val_loss: 0.0169 - lr: 1.0000e-05 - 738ms/epoch - 13ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0276 - val_loss: 0.0170 - lr: 1.0000e-05 - 745ms/epoch - 13ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0268 - val_loss: 0.0175 - lr: 1.0000e-05 - 695ms/epoch - 12ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0310 - val_loss: 0.0174 - lr: 1.0000e-05 - 678ms/epoch - 12ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0268 - val_loss: 0.0180 - lr: 1.0000e-05 - 707ms/epoch - 12ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0286 - val_loss: 0.0182 - lr: 1.0000e-05 - 722ms/epoch - 12ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0280 - val_loss: 0.0182 - lr: 1.0000e-05 - 673ms/epoch - 12ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0277 - val_loss: 0.0187 - lr: 1.0000e-05 - 686ms/epoch - 12ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0275 - val_loss: 0.0191 - lr: 1.0000e-05 - 708ms/epoch - 12ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0292 - val_loss: 0.0189 - lr: 1.0000e-05 - 663ms/epoch - 11ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0256 - val_loss: 0.0189 - lr: 1.0000e-05 - 706ms/epoch - 12ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0255 - val_loss: 0.0186 - lr: 1.0000e-05 - 687ms/epoch - 12ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0253 - val_loss: 0.0183 - lr: 1.0000e-05 - 718ms/epoch - 12ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0251 - val_loss: 0.0179 - lr: 1.0000e-05 - 659ms/epoch - 11ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0266 - val_loss: 0.0174 - lr: 1.0000e-05 - 709ms/epoch - 12ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0261 - val_loss: 0.0173 - lr: 1.0000e-05 - 734ms/epoch - 13ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0264 - val_loss: 0.0178 - lr: 1.0000e-05 - 686ms/epoch - 12ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0283 - val_loss: 0.0177 - lr: 1.0000e-05 - 697ms/epoch - 12ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0244 - val_loss: 0.0178 - lr: 1.0000e-05 - 693ms/epoch - 12ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0257 - val_loss: 0.0171 - lr: 1.0000e-05 - 721ms/epoch - 12ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0276 - val_loss: 0.0168 - lr: 1.0000e-05 - 675ms/epoch - 12ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0265 - val_loss: 0.0165 - lr: 1.0000e-05 - 719ms/epoch - 12ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0262 - val_loss: 0.0166 - lr: 1.0000e-05 - 651ms/epoch - 11ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0261 - val_loss: 0.0160 - lr: 1.0000e-05 - 652ms/epoch - 11ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0265 - val_loss: 0.0156 - lr: 1.0000e-05 - 682ms/epoch - 12ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0256 - val_loss: 0.0156 - lr: 1.0000e-05 - 657ms/epoch - 11ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0242 - val_loss: 0.0155 - lr: 1.0000e-05 - 680ms/epoch - 12ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0244 - val_loss: 0.0152 - lr: 1.0000e-05 - 687ms/epoch - 12ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0245 - val_loss: 0.0150 - lr: 1.0000e-05 - 706ms/epoch - 12ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0268 - val_loss: 0.0149 - lr: 1.0000e-05 - 699ms/epoch - 12ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0259 - val_loss: 0.0148 - lr: 1.0000e-05 - 707ms/epoch - 12ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0257 - val_loss: 0.0151 - lr: 1.0000e-05 - 723ms/epoch - 12ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0272 - val_loss: 0.0153 - lr: 1.0000e-05 - 772ms/epoch - 13ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.00650
58/58 - 1s - loss: 0.0279 - val_loss: 0.0153 - lr: 1.0000e-05 - 667ms/epoch - 11ms/step
Epoch 00055: early stopping
SMA
Prediction vs Close:		50.0% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 36.8316918621851 
RMSE:	 6.068911917484476 
MAPE:	 5.0566072358251795

EMA
Prediction vs Close:		55.6% Accuracy
Prediction vs Prediction:	39.93% Accuracy
MSE:	 71.66249742365495 
RMSE:	 8.465370483543822 
MAPE:	 6.945165499596413

WMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	44.4% Accuracy
MSE:	 54.1687035256009 
RMSE:	 7.359939097954609 
MAPE:	 6.023882427496638

DEMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 122.04043856386052 
RMSE:	 11.047191433294731 
MAPE:	 9.094033094452044

KAMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	48.13% Accuracy
MSE:	 45.6842728316542 
RMSE:	 6.759014190816157 
MAPE:	 5.403969846039035

MIDPOINT
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 48.516945521896595 
RMSE:	 6.965410649911217 
MAPE:	 5.700028346794376
T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])

Triple Exponential Moving Average (T3) (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 5
    vfactor: 0.7
Outputs:
    real
19

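The T3 described above is Tillson's triple-smoothed moving average: three applications of the "generalised DEMA" GD(x) = EMA(x)·(1+v) − EMA(EMA(x))·v with volume factor v. A pandas sketch of that standard formula; TA-Lib seeds its EMAs over an unstable period, so early values will not match TA-Lib exactly:

```python
import pandas as pd

def t3(price, timeperiod=5, vfactor=0.7):
    """Tillson T3 as GD(GD(GD(price))), where
    GD(x) = EMA(x)*(1+v) - EMA(EMA(x))*v."""
    series = pd.Series(price, dtype=float)

    def gd(x):
        e1 = x.ewm(span=timeperiod, adjust=False).mean()
        e2 = e1.ewm(span=timeperiod, adjust=False).mean()
        return e1 * (1 + vfactor) - e2 * vfactor

    return gd(gd(gd(series)))
```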
Working on T3 predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.46 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4414.515, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3944.062, Time=0.05 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.51 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3715.173, Time=0.08 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3577.471, Time=0.10 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.83 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.75 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3579.471, Time=0.25 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 4.092 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1784.736
Date:                Sun, 12 Dec 2021   AIC                           3577.471
Time:                        18:38:36   BIC                           3596.235
Sample:                             0   HQIC                          3584.677
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1982      0.003   -389.844      0.000      -1.204      -1.192
ar.L2         -0.8974      0.006   -139.861      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.862      0.000      -0.410      -0.387
sigma2         4.9242      0.023    215.469      0.000       4.879       4.969
===================================================================================
Ljung-Box (L1) (Q):                  14.55   Jarque-Bera (JB):           2468024.38
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       274.15
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

WARNING:tensorflow:Layer lstm_6 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.03019, saving model to LSTM1.h5
43/43 - 2s - loss: 0.2376 - val_loss: 0.0302 - lr: 0.0010 - 2s/epoch - 55ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.03019
43/43 - 1s - loss: 0.0957 - val_loss: 0.0381 - lr: 0.0010 - 556ms/epoch - 13ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.03019
43/43 - 1s - loss: 0.0784 - val_loss: 0.4512 - lr: 0.0010 - 519ms/epoch - 12ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.03019
43/43 - 1s - loss: 0.0599 - val_loss: 0.0439 - lr: 0.0010 - 569ms/epoch - 13ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.03019
43/43 - 0s - loss: 0.0478 - val_loss: 0.1065 - lr: 0.0010 - 490ms/epoch - 11ms/step
Epoch 6/500

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00006: val_loss did not improve from 0.03019
43/43 - 1s - loss: 0.0452 - val_loss: 0.0366 - lr: 0.0010 - 534ms/epoch - 12ms/step
Epoch 7/500

Epoch 00007: val_loss improved from 0.03019 to 0.02559, saving model to LSTM1.h5
43/43 - 1s - loss: 0.0469 - val_loss: 0.0256 - lr: 1.0000e-04 - 583ms/epoch - 14ms/step
Epoch 8/500

Epoch 00008: val_loss improved from 0.02559 to 0.01815, saving model to LSTM1.h5
43/43 - 1s - loss: 0.0343 - val_loss: 0.0182 - lr: 1.0000e-04 - 582ms/epoch - 14ms/step
Epoch 9/500

Epoch 00009: val_loss improved from 0.01815 to 0.01555, saving model to LSTM1.h5
43/43 - 1s - loss: 0.0326 - val_loss: 0.0156 - lr: 1.0000e-04 - 563ms/epoch - 13ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.01555
43/43 - 1s - loss: 0.0353 - val_loss: 0.0160 - lr: 1.0000e-04 - 516ms/epoch - 12ms/step
Epoch 11/500

Epoch 00011: val_loss improved from 0.01555 to 0.01466, saving model to LSTM1.h5
43/43 - 1s - loss: 0.0314 - val_loss: 0.0147 - lr: 1.0000e-04 - 607ms/epoch - 14ms/step
Epoch 12/500

Epoch 00012: val_loss improved from 0.01466 to 0.01125, saving model to LSTM1.h5
43/43 - 1s - loss: 0.0359 - val_loss: 0.0112 - lr: 1.0000e-04 - 565ms/epoch - 13ms/step
Epoch 13/500

Epoch 00013: val_loss improved from 0.01125 to 0.01048, saving model to LSTM1.h5
43/43 - 1s - loss: 0.0324 - val_loss: 0.0105 - lr: 1.0000e-04 - 631ms/epoch - 15ms/step
Epoch 14/500

Epoch 00014: val_loss improved from 0.01048 to 0.00652, saving model to LSTM1.h5
43/43 - 1s - loss: 0.0331 - val_loss: 0.0065 - lr: 1.0000e-04 - 588ms/epoch - 14ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.00652
43/43 - 1s - loss: 0.0357 - val_loss: 0.0067 - lr: 1.0000e-04 - 558ms/epoch - 13ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.00652
43/43 - 1s - loss: 0.0323 - val_loss: 0.0074 - lr: 1.0000e-04 - 554ms/epoch - 13ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.00652
43/43 - 1s - loss: 0.0325 - val_loss: 0.0095 - lr: 1.0000e-04 - 546ms/epoch - 13ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.00652
43/43 - 1s - loss: 0.0339 - val_loss: 0.0079 - lr: 1.0000e-04 - 562ms/epoch - 13ms/step
Epoch 19/500

Epoch 00019: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00019: val_loss did not improve from 0.00652
43/43 - 1s - loss: 0.0336 - val_loss: 0.0088 - lr: 1.0000e-04 - 539ms/epoch - 13ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.00652
43/43 - 1s - loss: 0.0344 - val_loss: 0.0086 - lr: 1.0000e-05 - 575ms/epoch - 13ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.00652
43/43 - 1s - loss: 0.0308 - val_loss: 0.0082 - lr: 1.0000e-05 - 580ms/epoch - 13ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.00652
43/43 - 1s - loss: 0.0309 - val_loss: 0.0081 - lr: 1.0000e-05 - 566ms/epoch - 13ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.00652
43/43 - 1s - loss: 0.0324 - val_loss: 0.0084 - lr: 1.0000e-05 - 569ms/epoch - 13ms/step
Epoch 24/500

Epoch 00024: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00024: val_loss did not improve from 0.00652
43/43 - 1s - loss: 0.0351 - val_loss: 0.0084 - lr: 1.0000e-05 - 582ms/epoch - 14ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.00652
43/43 - 1s - loss: 0.0340 - val_loss: 0.0083 - lr: 1.0000e-05 - 554ms/epoch - 13ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.00652
43/43 - 1s - loss: 0.0334 - val_loss: 0.0078 - lr: 1.0000e-05 - 533ms/epoch - 12ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.00652
43/43 - 1s - loss: 0.0340 - val_loss: 0.0075 - lr: 1.0000e-05 - 542ms/epoch - 13ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.00652
43/43 - 1s - loss: 0.0319 - val_loss: 0.0075 - lr: 1.0000e-05 - 557ms/epoch - 13ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.00652
43/43 - 1s - loss: 0.0365 - val_loss: 0.0077 - lr: 1.0000e-05 - 547ms/epoch - 13ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.00652
43/43 - 1s - loss: 0.0324 - val_loss: 0.0077 - lr: 1.0000e-05 - 552ms/epoch - 13ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.00652
43/43 - 1s - loss: 0.0309 - val_loss: 0.0078 - lr: 1.0000e-05 - 561ms/epoch - 13ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.00652
43/43 - 1s - loss: 0.0326 - val_loss: 0.0074 - lr: 1.0000e-05 - 581ms/epoch - 14ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.00652
43/43 - 1s - loss: 0.0316 - val_loss: 0.0071 - lr: 1.0000e-05 - 526ms/epoch - 12ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.00652
43/43 - 1s - loss: 0.0342 - val_loss: 0.0067 - lr: 1.0000e-05 - 600ms/epoch - 14ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.00652
43/43 - 1s - loss: 0.0329 - val_loss: 0.0066 - lr: 1.0000e-05 - 590ms/epoch - 14ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.00652
43/43 - 1s - loss: 0.0298 - val_loss: 0.0067 - lr: 1.0000e-05 - 539ms/epoch - 13ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.00652
43/43 - 1s - loss: 0.0282 - val_loss: 0.0068 - lr: 1.0000e-05 - 542ms/epoch - 13ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.00652
43/43 - 1s - loss: 0.0292 - val_loss: 0.0070 - lr: 1.0000e-05 - 539ms/epoch - 13ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.00652
43/43 - 1s - loss: 0.0313 - val_loss: 0.0070 - lr: 1.0000e-05 - 530ms/epoch - 12ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.00652
43/43 - 1s - loss: 0.0320 - val_loss: 0.0068 - lr: 1.0000e-05 - 571ms/epoch - 13ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.00652
43/43 - 1s - loss: 0.0316 - val_loss: 0.0070 - lr: 1.0000e-05 - 583ms/epoch - 14ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.00652
43/43 - 1s - loss: 0.0334 - val_loss: 0.0072 - lr: 1.0000e-05 - 529ms/epoch - 12ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.00652
43/43 - 1s - loss: 0.0299 - val_loss: 0.0074 - lr: 1.0000e-05 - 559ms/epoch - 13ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.00652
43/43 - 1s - loss: 0.0314 - val_loss: 0.0072 - lr: 1.0000e-05 - 549ms/epoch - 13ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.00652
43/43 - 1s - loss: 0.0336 - val_loss: 0.0069 - lr: 1.0000e-05 - 542ms/epoch - 13ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.00652
43/43 - 1s - loss: 0.0305 - val_loss: 0.0068 - lr: 1.0000e-05 - 526ms/epoch - 12ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.00652
43/43 - 1s - loss: 0.0315 - val_loss: 0.0069 - lr: 1.0000e-05 - 561ms/epoch - 13ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.00652
43/43 - 1s - loss: 0.0308 - val_loss: 0.0067 - lr: 1.0000e-05 - 599ms/epoch - 14ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.00652
43/43 - 1s - loss: 0.0326 - val_loss: 0.0067 - lr: 1.0000e-05 - 515ms/epoch - 12ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.00652
43/43 - 1s - loss: 0.0296 - val_loss: 0.0068 - lr: 1.0000e-05 - 565ms/epoch - 13ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.00652
43/43 - 1s - loss: 0.0310 - val_loss: 0.0067 - lr: 1.0000e-05 - 550ms/epoch - 13ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.00652
43/43 - 1s - loss: 0.0315 - val_loss: 0.0065 - lr: 1.0000e-05 - 517ms/epoch - 12ms/step
Epoch 53/500

Epoch 00053: val_loss improved from 0.00652 to 0.00626, saving model to LSTM1.h5
43/43 - 1s - loss: 0.0303 - val_loss: 0.0063 - lr: 1.0000e-05 - 780ms/epoch - 18ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.00626
43/43 - 1s - loss: 0.0323 - val_loss: 0.0063 - lr: 1.0000e-05 - 552ms/epoch - 13ms/step
Epoch 55/500

Epoch 00055: val_loss improved from 0.00626 to 0.00594, saving model to LSTM1.h5
43/43 - 1s - loss: 0.0347 - val_loss: 0.0059 - lr: 1.0000e-05 - 582ms/epoch - 14ms/step
Epoch 56/500

Epoch 00056: val_loss improved from 0.00594 to 0.00579, saving model to LSTM1.h5
43/43 - 1s - loss: 0.0346 - val_loss: 0.0058 - lr: 1.0000e-05 - 614ms/epoch - 14ms/step
Epoch 57/500

Epoch 00057: val_loss improved from 0.00579 to 0.00571, saving model to LSTM1.h5
43/43 - 1s - loss: 0.0318 - val_loss: 0.0057 - lr: 1.0000e-05 - 564ms/epoch - 13ms/step
Epoch 58/500

Epoch 00058: val_loss improved from 0.00571 to 0.00552, saving model to LSTM1.h5
43/43 - 1s - loss: 0.0310 - val_loss: 0.0055 - lr: 1.0000e-05 - 593ms/epoch - 14ms/step
Epoch 59/500

Epoch 00059: val_loss improved from 0.00552 to 0.00510, saving model to LSTM1.h5
43/43 - 1s - loss: 0.0332 - val_loss: 0.0051 - lr: 1.0000e-05 - 559ms/epoch - 13ms/step
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.00510
43/43 - 1s - loss: 0.0291 - val_loss: 0.0052 - lr: 1.0000e-05 - 535ms/epoch - 12ms/step
Epoch 61/500

Epoch 00061: val_loss did not improve from 0.00510
43/43 - 1s - loss: 0.0338 - val_loss: 0.0056 - lr: 1.0000e-05 - 561ms/epoch - 13ms/step
Epoch 62/500

Epoch 00062: val_loss did not improve from 0.00510
43/43 - 1s - loss: 0.0332 - val_loss: 0.0054 - lr: 1.0000e-05 - 580ms/epoch - 13ms/step
Epoch 63/500

Epoch 00063: val_loss did not improve from 0.00510
43/43 - 1s - loss: 0.0330 - val_loss: 0.0054 - lr: 1.0000e-05 - 544ms/epoch - 13ms/step
Epoch 64/500

Epoch 00064: val_loss improved from 0.00510 to 0.00505, saving model to LSTM1.h5
43/43 - 1s - loss: 0.0311 - val_loss: 0.0050 - lr: 1.0000e-05 - 534ms/epoch - 12ms/step
Epoch 65/500

Epoch 00065: val_loss did not improve from 0.00505
43/43 - 1s - loss: 0.0307 - val_loss: 0.0051 - lr: 1.0000e-05 - 546ms/epoch - 13ms/step
Epoch 66/500

Epoch 00066: val_loss did not improve from 0.00505
43/43 - 1s - loss: 0.0331 - val_loss: 0.0052 - lr: 1.0000e-05 - 503ms/epoch - 12ms/step
Epoch 67/500

Epoch 00067: val_loss improved from 0.00505 to 0.00499, saving model to LSTM1.h5
43/43 - 1s - loss: 0.0291 - val_loss: 0.0050 - lr: 1.0000e-05 - 572ms/epoch - 13ms/step
Epoch 68/500

Epoch 00068: val_loss improved from 0.00499 to 0.00485, saving model to LSTM1.h5
43/43 - 1s - loss: 0.0292 - val_loss: 0.0049 - lr: 1.0000e-05 - 551ms/epoch - 13ms/step
Epoch 69/500

Epoch 00069: val_loss improved from 0.00485 to 0.00465, saving model to LSTM1.h5
43/43 - 1s - loss: 0.0318 - val_loss: 0.0047 - lr: 1.0000e-05 - 577ms/epoch - 13ms/step
Epoch 70/500

Epoch 00070: val_loss did not improve from 0.00465
43/43 - 1s - loss: 0.0330 - val_loss: 0.0047 - lr: 1.0000e-05 - 508ms/epoch - 12ms/step
Epoch 71/500

Epoch 00071: val_loss did not improve from 0.00465
43/43 - 1s - loss: 0.0289 - val_loss: 0.0048 - lr: 1.0000e-05 - 567ms/epoch - 13ms/step
Epoch 72/500

Epoch 00072: val_loss improved from 0.00465 to 0.00464, saving model to LSTM1.h5
43/43 - 1s - loss: 0.0339 - val_loss: 0.0046 - lr: 1.0000e-05 - 561ms/epoch - 13ms/step
Epoch 73/500

Epoch 00073: val_loss improved from 0.00464 to 0.00446, saving model to LSTM1.h5
43/43 - 1s - loss: 0.0283 - val_loss: 0.0045 - lr: 1.0000e-05 - 599ms/epoch - 14ms/step
Epoch 74/500

Epoch 00074: val_loss improved from 0.00446 to 0.00439, saving model to LSTM1.h5
43/43 - 1s - loss: 0.0298 - val_loss: 0.0044 - lr: 1.0000e-05 - 591ms/epoch - 14ms/step
Epoch 75/500

Epoch 00075: val_loss improved from 0.00439 to 0.00436, saving model to LSTM1.h5
43/43 - 1s - loss: 0.0302 - val_loss: 0.0044 - lr: 1.0000e-05 - 614ms/epoch - 14ms/step
Epoch 76/500

Epoch 00076: val_loss improved from 0.00436 to 0.00432, saving model to LSTM1.h5
43/43 - 1s - loss: 0.0320 - val_loss: 0.0043 - lr: 1.0000e-05 - 574ms/epoch - 13ms/step
Epoch 77/500

Epoch 00077: val_loss did not improve from 0.00432
43/43 - 1s - loss: 0.0308 - val_loss: 0.0044 - lr: 1.0000e-05 - 507ms/epoch - 12ms/step
Epoch 78/500

Epoch 00078: val_loss did not improve from 0.00432
43/43 - 1s - loss: 0.0302 - val_loss: 0.0043 - lr: 1.0000e-05 - 572ms/epoch - 13ms/step
Epoch 79/500

Epoch 00079: val_loss did not improve from 0.00432
43/43 - 1s - loss: 0.0306 - val_loss: 0.0044 - lr: 1.0000e-05 - 522ms/epoch - 12ms/step
Epoch 80/500

Epoch 00080: val_loss did not improve from 0.00432
43/43 - 1s - loss: 0.0298 - val_loss: 0.0044 - lr: 1.0000e-05 - 568ms/epoch - 13ms/step
Epoch 81/500

Epoch 00081: val_loss improved from 0.00432 to 0.00432, saving model to LSTM1.h5
43/43 - 1s - loss: 0.0318 - val_loss: 0.0043 - lr: 1.0000e-05 - 541ms/epoch - 13ms/step
Epoch 82/500

Epoch 00082: val_loss improved from 0.00432 to 0.00428, saving model to LSTM1.h5
43/43 - 1s - loss: 0.0287 - val_loss: 0.0043 - lr: 1.0000e-05 - 535ms/epoch - 12ms/step
Epoch 83/500

Epoch 00083: val_loss improved from 0.00428 to 0.00422, saving model to LSTM1.h5
43/43 - 1s - loss: 0.0328 - val_loss: 0.0042 - lr: 1.0000e-05 - 584ms/epoch - 14ms/step
Epoch 84/500

Epoch 00084: val_loss improved from 0.00422 to 0.00420, saving model to LSTM1.h5
43/43 - 1s - loss: 0.0321 - val_loss: 0.0042 - lr: 1.0000e-05 - 572ms/epoch - 13ms/step
Epoch 85/500

Epoch 00085: val_loss did not improve from 0.00420
43/43 - 1s - loss: 0.0280 - val_loss: 0.0042 - lr: 1.0000e-05 - 563ms/epoch - 13ms/step
Epoch 86/500

Epoch 00086: val_loss did not improve from 0.00420
43/43 - 1s - loss: 0.0305 - val_loss: 0.0043 - lr: 1.0000e-05 - 540ms/epoch - 13ms/step
Epoch 87/500

Epoch 00087: val_loss did not improve from 0.00420
43/43 - 1s - loss: 0.0291 - val_loss: 0.0044 - lr: 1.0000e-05 - 577ms/epoch - 13ms/step
Epoch 88/500

Epoch 00088: val_loss did not improve from 0.00420
43/43 - 1s - loss: 0.0301 - val_loss: 0.0044 - lr: 1.0000e-05 - 550ms/epoch - 13ms/step
Epoch 89/500

Epoch 00089: val_loss did not improve from 0.00420
43/43 - 1s - loss: 0.0315 - val_loss: 0.0046 - lr: 1.0000e-05 - 545ms/epoch - 13ms/step
Epoch 90/500

Epoch 00090: val_loss did not improve from 0.00420
43/43 - 1s - loss: 0.0308 - val_loss: 0.0044 - lr: 1.0000e-05 - 550ms/epoch - 13ms/step
Epoch 91/500

Epoch 00091: val_loss did not improve from 0.00420
43/43 - 1s - loss: 0.0302 - val_loss: 0.0044 - lr: 1.0000e-05 - 562ms/epoch - 13ms/step
Epoch 92/500

Epoch 00092: val_loss did not improve from 0.00420
43/43 - 1s - loss: 0.0293 - val_loss: 0.0043 - lr: 1.0000e-05 - 504ms/epoch - 12ms/step
[epochs 93-134 elided: val_loss did not improve from 0.00420; at lr = 1e-05 the training loss hovered in the 0.025-0.032 range while val_loss drifted between 0.0043 and 0.0066]
43/43 - 1s - loss: 0.0251 - val_loss: 0.0055 - lr: 1.0000e-05 - 544ms/epoch - 13ms/step
Epoch 00134: early stopping
[interim metrics for SMA through T3 elided: the identical values appear in the full summary after the TEMA run]
TEMA
TEMA([input_arrays], [timeperiod=30])

Triple Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
9
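The TA-Lib help output above describes TEMA only loosely. TEMA follows the standard triple-EMA construction, TEMA = 3·EMA1 − 3·EMA2 + EMA3, where each EMA is applied to the output of the previous one. A minimal pandas sketch of that formula (independent of TA-Lib, which additionally discards the warm-up values at the start of the series):

```python
import pandas as pd

def tema(price: pd.Series, timeperiod: int = 30) -> pd.Series:
    """Triple Exponential Moving Average: 3*EMA1 - 3*EMA2 + EMA3."""
    ema1 = price.ewm(span=timeperiod, adjust=False).mean()
    ema2 = ema1.ewm(span=timeperiod, adjust=False).mean()
    ema3 = ema2.ewm(span=timeperiod, adjust=False).mean()
    return 3 * ema1 - 3 * ema2 + ema3
```

For a constant series all three EMAs coincide, so TEMA returns the constant unchanged; the repeated smoothing is what lets TEMA track turns faster than a plain EMA of the same period.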

Working on TEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.65 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4352.703, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3889.412, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.34 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3689.930, Time=0.08 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3574.245, Time=0.10 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.53 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=1.01 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3576.245, Time=0.23 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 4.049 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1783.123
Date:                Sun, 12 Dec 2021   AIC                           3574.245
Time:                        18:41:21   BIC                           3593.008
Sample:                             0   HQIC                          3581.451
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1480      0.004   -302.430      0.000      -1.155      -1.141
ar.L2         -0.8300      0.008    -99.682      0.000      -0.846      -0.814
ar.L3         -0.3687      0.007    -50.527      0.000      -0.383      -0.354
sigma2         4.9055      0.028    175.970      0.000       4.851       4.960
===================================================================================
Ljung-Box (L1) (Q):                  11.61   Jarque-Bera (JB):           1261976.58
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.16   Skew:                             2.52
Prob(H) (two-sided):                  0.00   Kurtosis:                       196.90
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 
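The information criteria that the stepwise search minimizes follow directly from the log-likelihood in the SARIMAX table. With k = 4 estimated parameters (ar.L1..ar.L3 and sigma2) and log L = −1783.123, a pure-Python check reproduces the printed AIC up to rounding; for BIC the effective sample size after third-order differencing (about 808 − 3 = 805 here, an assumption of this sketch) also enters:

```python
import math

def aic(log_likelihood: float, k: int) -> float:
    """Akaike information criterion: 2k - 2 ln L."""
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood: float, k: int, n: int) -> float:
    """Bayesian information criterion: k ln n - 2 ln L."""
    return k * math.log(n) - 2 * log_likelihood

# aic(-1783.123, 4) gives 3574.246, matching the table's 3574.245 up to rounding;
# bic(-1783.123, 4, 805) gives ~3593.01, matching the table's 3593.008
```

BIC penalizes extra parameters more heavily than AIC for n > e², which is why the two criteria can prefer different orders on long series.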

WARNING:tensorflow:Layer lstm_7 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.00776, saving model to LSTM1.h5
90/90 - 3s - loss: 0.1544 - val_loss: 0.0078 - lr: 0.0010 - 3s/epoch - 32ms/step

Epoch 00004: val_loss improved from 0.00776 to 0.00637, saving model to LSTM1.h5
Epoch 00006: val_loss improved from 0.00637 to 0.00614, saving model to LSTM1.h5
Epoch 00011: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00016: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
[epochs 7-56 elided: val_loss did not improve from 0.00614; at lr = 1e-05 the training loss settled around 0.02 while val_loss decayed slowly from ~0.48 toward 0.077]
90/90 - 1s - loss: 0.0201 - val_loss: 0.0767 - lr: 1.0000e-05 - 1s/epoch - 12ms/step
Epoch 00056: early stopping
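The learning-rate trace in the log above (1e-3 at the start, 1e-4 from epoch 12, 1e-5 from epoch 17) follows Keras's ReduceLROnPlateau with factor=0.1, patience=5, min_lr=1e-5, as configured in the callbacks used by get_lstm below. The rule is simple enough to replay in plain Python (ignoring Keras's cooldown bookkeeping, which is unused here):

```python
def schedule_lr(val_losses, lr0=1e-3, factor=0.1, patience=5, min_lr=1e-5):
    """Replay ReduceLROnPlateau: cut lr by `factor` after `patience`
    epochs without a new best val_loss, never going below `min_lr`."""
    lr, best, wait = lr0, float("inf"), 0
    history = []
    for loss in val_losses:
        history.append(lr)          # lr actually used for this epoch
        if loss < best:
            best, wait = loss, 0
        else:
            wait += 1
            if wait >= patience:    # plateau: reduce, effective next epoch
                lr = max(lr * factor, min_lr)
                wait = 0
    return history
```

Feeding in the first twelve val_loss values from the log (best at epoch 6, then five epochs without improvement) reproduces the reduction announced at epoch 11 and applied from epoch 12.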
SMA
Prediction vs Close:		50.0% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 36.8316918621851 
RMSE:	 6.068911917484476 
MAPE:	 5.0566072358251795

EMA
Prediction vs Close:		55.6% Accuracy
Prediction vs Prediction:	39.93% Accuracy
MSE:	 71.66249742365495 
RMSE:	 8.465370483543822 
MAPE:	 6.945165499596413

WMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	44.4% Accuracy
MSE:	 54.1687035256009 
RMSE:	 7.359939097954609 
MAPE:	 6.023882427496638

DEMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 122.04043856386052 
RMSE:	 11.047191433294731 
MAPE:	 9.094033094452044

KAMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	48.13% Accuracy
MSE:	 45.6842728316542 
RMSE:	 6.759014190816157 
MAPE:	 5.403969846039035

MIDPOINT
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 48.516945521896595 
RMSE:	 6.965410649911217 
MAPE:	 5.700028346794376

T3
Prediction vs Close:		55.6% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 196.53845131752638 
RMSE:	 14.019217214863545 
MAPE:	 11.19411588232732

TEMA
Prediction vs Close:		50.0% Accuracy
Prediction vs Prediction:	49.63% Accuracy
MSE:	 33.21332596254529 
RMSE:	 5.763100377621866 
MAPE:	 4.8410208542529105
Runtime: mins: 17.451961848783338
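The per-MA summaries above can be reproduced with a few NumPy lines. The exact definitions of the two accuracy figures are not shown in this excerpt, so the hypothetical `eval_forecast` below assumes "Prediction vs Close" means directional accuracy of the predicted move relative to the previous close, and that MAPE is reported in percent:

```python
import numpy as np

def eval_forecast(actual, predicted):
    """MSE, RMSE, MAPE (%) and directional accuracy (%) of a forecast."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    mse = np.mean((actual - predicted) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((actual - predicted) / actual)) * 100
    # directional accuracy: did the forecast call the up/down move correctly?
    actual_move = np.sign(np.diff(actual))
    pred_move = np.sign(predicted[1:] - actual[:-1])
    direction_acc = np.mean(actual_move == pred_move) * 100
    return mse, rmse, mape, direction_acc
```

Note that a forecast can have low RMSE yet near-coin-flip directional accuracy (as the ~50% figures above show), which is why both kinds of metric are reported.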

Architecture used

In [ ]:
from google.colab import files
import cv2
uploaded = files.upload()
Saving Experiment1.png to Experiment1 (1).png
In [ ]:
img = cv2.cvtColor(cv2.imread('Experiment1.png'), cv2.COLOR_BGR2RGB)  # OpenCV loads BGR; convert for matplotlib
plt.figure(figsize=(20,10))
plt.axis("off")
plt.title('LSTM Architecture Experiment1', fontsize=18)
plt.imshow(img)
Out[ ]:
<matplotlib.image.AxesImage at 0x7f4c3a28ca50>

Excess kurtosis is a metric that compares the kurtosis of a distribution against the kurtosis of a normal distribution. The kurtosis of a normal distribution equals 3. Therefore, the excess kurtosis is found using the formula below:

Excess Kurtosis = Kurtosis – 3
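As a quick numeric check of the definition (pure Python; note `scipy.stats.kurtosis` returns this excess value by default, while the statsmodels SARIMAX table above reports raw kurtosis, so its 196.90 corresponds to an excess kurtosis of about 193.9):

```python
def excess_kurtosis(xs):
    """Fourth standardized moment minus 3 (0 for a normal distribution)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n   # variance (population)
    m4 = sum((x - mean) ** 4 for x in xs) / n   # fourth central moment
    return m4 / m2 ** 2 - 3

print(excess_kurtosis([1, 2, 3, 4, 5]))  # -1.3: flatter than normal (platykurtic)
```

A negative value indicates thinner tails than a normal distribution; a large positive value, as in the ARIMA residuals above, indicates very heavy tails.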

Model Plots

In [ ]:
np.save("X_train_appl.npy", X_train)
np.save("y_train_appl.npy", y_train)
np.save("X_test_appl.npy", X_test)
np.save("y_test_appl.npy", y_test)
np.save("yc_train_appl.npy", yc_train)
np.save("yc_test_appl.npy", yc_test)
np.save('index_train_appl.npy', index_train)
np.save('index_test_appl.npy', index_test)
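The arrays saved above can be restored in a later session with np.load. A minimal round-trip sketch using a stand-in array and a temporary directory (so it does not touch the real files):

```python
import os
import tempfile
import numpy as np

with tempfile.TemporaryDirectory() as tmp:
    demo_arr = np.arange(12, dtype=float).reshape(4, 3)  # stand-in for e.g. X_train
    path = os.path.join(tmp, "demo.npy")
    np.save(path, demo_arr)          # .npy preserves shape and dtype exactly
    restored = np.load(path)
    assert np.array_equal(restored, demo_arr)
```

Because .npy stores shape and dtype, no reshape bookkeeping is needed when the train/test arrays are reloaded for the plotting cells below.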
In [ ]:
list(simulation1.keys())
Out[ ]:
['SMA', 'EMA', 'WMA', 'DEMA', 'KAMA', 'MIDPOINT', 'T3', 'TEMA']
In [149]:
cd Baseline/
/content/drive/.shortcut-targets-by-id/1IaGjVBlTspxI2CHSrxfYnaiYvsaG0pHs/Stock price prediction/Archana - LSTM Hybrid/Outputs/Baseline
In [150]:
with open('simulation1_data.json') as json_file:
    simulation1 = json.load(json_file)
imgfile = 'Experiment1'
In [157]:
for i in range(len(list(simulation1.keys()))):
  SIM = list(simulation1.keys())[i]
  plot_train(simulation1,SIM)
  plot_test(simulation1,SIM)
----- Train RMSE for SMA ----- 8.101384407635955
----- Train_MSE_LSTM for SMA ----- 65.63242932028696
----- Train MAE LSTM for SMA ----- 7.12864265033611
----- Test RMSE for SMA----- 6.068911917484476
----- Test_MSE_LSTM for SMA----- 36.8316918621851
----- Test_MAE_LSTM for SMA----- 5.0566072358251795
----- Train RMSE for EMA ----- 9.261011933284287
----- Train_MSE_LSTM for EMA ----- 85.76634202843398
----- Train MAE LSTM for EMA ----- 8.125450484877646
----- Test RMSE for EMA----- 8.465370483543822
----- Test_MSE_LSTM for EMA----- 71.66249742365495
----- Test_MAE_LSTM for EMA----- 6.945165499596413
----- Train RMSE for WMA ----- 9.680943385400116
----- Train_MSE_LSTM for WMA ----- 93.72066483132228
----- Train MAE LSTM for WMA ----- 8.624995489295866
----- Test RMSE for WMA----- 7.359939097954609
----- Test_MSE_LSTM for WMA----- 54.1687035256009
----- Test_MAE_LSTM for WMA----- 6.023882427496638
----- Train RMSE for DEMA ----- 11.21130086144242
----- Train_MSE_LSTM for DEMA ----- 125.69326700577953
----- Train MAE LSTM for DEMA ----- 9.93110792796441
----- Test RMSE for DEMA----- 11.047191433294731
----- Test_MSE_LSTM for DEMA----- 122.04043856386052
----- Test_MAE_LSTM for DEMA----- 9.094033094452044
----- Train RMSE for KAMA ----- 9.898156197601981
----- Train_MSE_LSTM for KAMA ----- 97.97349611212653
----- Train MAE LSTM for KAMA ----- 8.903435802045117
----- Test RMSE for KAMA----- 6.759014190816157
----- Test_MSE_LSTM for KAMA----- 45.6842728316542
----- Test_MAE_LSTM for KAMA----- 5.403969846039035
----- Train RMSE for MIDPOINT ----- 8.590477476695101
----- Train_MSE_LSTM for MIDPOINT ----- 73.79630327760584
----- Train MAE LSTM for MIDPOINT ----- 7.599525995487992
----- Test RMSE for MIDPOINT----- 6.965410649911217
----- Test_MSE_LSTM for MIDPOINT----- 48.516945521896595
----- Test_MAE_LSTM for MIDPOINT----- 5.700028346794376
----- Train RMSE for T3 ----- 10.725412576084105
----- Train_MSE_LSTM for T3 ----- 115.03447492722309
----- Train MAE LSTM for T3 ----- 9.62275313761657
----- Test RMSE for T3----- 14.019217214863545
----- Test_MSE_LSTM for T3----- 196.53845131752638
----- Test_MAE_LSTM for T3----- 11.19411588232732
----- Train RMSE for TEMA ----- 6.8065683514131345
----- Train_MSE_LSTM for TEMA ----- 46.32937272245892
----- Train MAE LSTM for TEMA ----- 4.696228411463596
----- Test RMSE for TEMA----- 5.763100377621866
----- Test_MSE_LSTM for TEMA----- 33.21332596254529
----- Test_MAE_LSTM for TEMA----- 4.8410208542529105

Univariate ARIMA Multistep Multivariate LSTM Hybrid Model Experiment 2

In [ ]:
def get_lstm(data, original_data, train_len, test_len, img_file, ma, lstm_len=3):
    # prepare train and test data
    X_value = pd.DataFrame(data.iloc[:, :])
    y_value = pd.DataFrame(data.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    # Get data and check shape: X has shape (samples, n_steps_in, features),
    # e.g. 224 x 3 x 21 (each 3 x 21 slice is 3 days' worth of data);
    # yc holds the corresponding closing-price values
    X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)
    # pdb.set_trace()
    X_train, X_test = split_train_test(X)
    y_train, y_test = split_train_test(y)
    # yc_train, yc_test = split_train_test(original_data)
    index_train, index_test = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    det = 20  # constant offset subtracted from the test predictions below (empirical detrend)
    input_dim = X_train.shape[1]     # n_steps_in (e.g. 3)
    feature_size = X_train.shape[2]  # number of input features
    output_dim = y_train.shape[1]    # n_steps_out (e.g. 1)



    # # Option 1
    # # Set up & fit LSTM RNN
    # model = Sequential()
    # model.add(LSTM(256, activation='relu', kernel_initializer='he_normal', input_shape=(input_dim, feature_size)))
    # model.add(Dense(units=64,activation='relu'))
    # model.add(Dropout(0.5))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(learning_rate = 0.001), loss='mse')

    # ## Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()


    # option 2
    model = Sequential()
    model.add(Bidirectional(LSTM(units= 128), input_shape=(input_dim, feature_size)))
    model.add(Dense(64))
    model.add(Dense(units=output_dim))
    model.compile(optimizer=Adam(learning_rate = 0.001), loss='mean_squared_error', metrics=['accuracy'])
    # Common code
    callbacks = [
    EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    ModelCheckpoint('LSTM2.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    fname1 = img_file+'.png'
    tensorflow.keras.utils.plot_model(
        model, to_file=fname1, show_shapes=True, show_dtype=False,
        show_layer_names=True, expand_nested=False, dpi=96,
        layer_range=None, show_layer_activations=False
    )
    history = model.fit(X_train, y_train, epochs=500, batch_size=int( optimized_period[ma]), verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # plot loss
    fname2 = img_file+'-'+ma
    plt.title(img_file+'-'+ma+' Loss')
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    pyplot.plot(history.history['loss'], label='train')
    pyplot.plot(history.history['val_loss'], label='validation')
    pyplot.legend()
    pyplot.savefig(fname2+'.png',dpi='figure')
    pyplot.show()




    # Option 3
    # define custom activation
    # 
    # class Double_Tanh(Activation):
    #     def __init__(self, activation, **kwargs):
    #         super(Double_Tanh, self).__init__(activation, **kwargs)
    #         self.__name__ = 'double_tanh'

    # def double_tanh(x):
    #     return (K.tanh(x) * 2)

    # get_custom_objects().update({'double_tanh':Double_Tanh(double_tanh)})
    #     # Model Generation
    # model = Sequential()
    # #check https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
    # model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2, kernel_regularizer=l1_l2(0.00,0.00), bias_regularizer=l1_l2(0.00,0.00)))
    # model.add(Dense(1))
    # model.add(Activation(double_tanh))
    # model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()

    # Option 4
    # Set up & fit LSTM RNN
    # model = Sequential()
    # model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(x_train.shape[1], 1)))
    # model.add(LSTM(units=int(lstm_len/2)))
    # model.add(Dense(1, activation='sigmoid'))
    # model.compile(loss='mean_squared_error', optimizer='adam')
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()



    # Generate predictions
    predictiontr = model.predict(X_train, verbose=0)
    predictiontr = y_scaler.inverse_transform(predictiontr).tolist()
    outputtr = []
    for i in range(len(predictiontr)):
        outputtr.extend(predictiontr[i])
    predictiontr = outputtr
    # Generate error data: compare in the original price scale,
    # since predictiontr has already been inverse-transformed
    ## replace with yc, X_test generated by the new multistep method
    Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()
    mse_tr = mean_squared_error(Original_tr, predictiontr)
    rmse_tr = mse_tr ** 0.5
    # mape_tr = mean_absolute_percentage_error(Original_tr, pd.Series(predictiontr))
    mae_tr = mean_absolute_error(Original_tr, pd.Series(predictiontr))
    # Original_tr = pd.Series(yc_train)


    predictionte = model.predict(X_test, verbose=0)
    predictionte = (y_scaler.inverse_transform(predictionte)-det).tolist()
    outputte = []
    for i in range(len(predictionte)):
        outputte.extend(predictionte[i])
    predictionte = outputte
    # Generate error data: compare in the original price scale
    Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()
    mse_te = mean_squared_error(Original_te, predictionte)
    rmse_te = mse_te ** 0.5
    # mape_te = mean_absolute_percentage_error(Original_te, pd.Series(predictionte))
    mae_te = mean_absolute_error(Original_te, pd.Series(predictionte))
    # Original_te = pd.Series(yc_test)

    return Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,Original_te,predictionte, mse_te,rmse_te,mae_te
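The hybrid forecast assembled in the driver cell that follows is an element-wise sum of the two component forecasts: the ARIMA prediction of the smoothed (low-volatility) series plus the LSTM prediction of the residual (high-volatility) series. Schematically, with toy stand-in numbers for the get_arima / get_lstm outputs:

```python
import numpy as np

arima_part = np.array([101.0, 102.0, 103.0])  # stand-in: ARIMA forecast of the smoothed trend
lstm_part = np.array([0.4, -0.7, 0.2])        # stand-in: LSTM forecast of the residual
hybrid = arima_part + lstm_part               # each step: trend + residual
```

Because the moving average plus its residual reconstructs the original close exactly, adding the two forecasts yields a forecast of the close itself; this is the decomposition the driver loop builds with `subtract` and recombines in `final_prediction`.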
In [ ]:
if __name__ == '__main__':
    start_time = timeit.default_timer()
    simulation2 = {}
    imgfile = 'Experiment2'
    for ma in optimized_period:
              print(ma)
              print(functions[ma])
              print ( int( optimized_period[ma]))
            # if ma == 'SMA':
              low_vol = df.apply(lambda c:  functions[ma](c, timeperiod = int( optimized_period[ma])))
              low_vol = low_vol.fillna(0)
              low_vol_data = df['close']
              high_vol = pd.DataFrame()
              df2 = df.copy()
              for i in df2.columns:
                if i in low_vol.columns:
                  high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
              high_vol_data = df['close']
              ## *****************************************************
              # Generate ARIMA and LSTM predictions
              print('\nWorking on ' + ma + ' predictions')
              try:
                print('parameters used : ', train_len, test_len)
                low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse, low_vol_mae = get_arima(low_vol, low_vol_data, train_len, test_len)
              except Exception as e:
                  print('ARIMA error ({}), skipping to next MA type'.format(e))
                  continue
              Original_tr, predictiontr, mse_tr, rmse_tr, mae_tr, high_vol_Original, high_vol_prediction, high_vol_mse, high_vol_rmse, high_vol_mae = get_lstm(high_vol, high_vol_data, train_len, test_len, imgfile, ma)
              # Add the LSTM residual forecast back onto the close price (train window)
              final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr)
              mse_ftr = mean_squared_error(df['close'].head(train_len).values, final_prediction_tr.values)
              rmse_ftr = mse_ftr ** 0.5
              mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
              mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)

              # Combine the ARIMA (low-volatility) and LSTM (high-volatility) forecasts,
              # skipping the first 3 ARIMA steps to align with the multistep offset
              final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)
              mse = mean_squared_error(df['close'].tail(test_len).values, final_prediction.values)
              rmse = mse ** 0.5
              mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
              mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
              # Generate prediction accuracy
              actual = df['close'].tail(test_len).values
              result_1 = []
              result_2 = []
              for i in range(1, len(final_prediction)):
                  # Compare prediction to previous close price
                  if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
                      result_1.append(1)
                  elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
                      result_1.append(1)
                  else:
                      result_1.append(0)

                  # Compare prediction to previous prediction
                  if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
                      result_2.append(1)
                  elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
                      result_2.append(1)
                  else:
                      result_2.append(0)

              accuracy_1 = np.mean(result_1)
              accuracy_2 = np.mean(result_2)

              simulation2[ma] = {'low_vol': {'original': list(low_vol_Original), 'prediction': list(low_vol_prediction), 'mse': low_vol_mse,
                                            'rmse': low_vol_rmse, 'mae': low_vol_mae},
                                'high_vol': {'original': list(high_vol_Original), 'prediction': list(high_vol_prediction), 'mse': high_vol_mse,
                                            'rmse': high_vol_rmse, 'mae': high_vol_mae},
                                'final_tr': {'original': df['close'].head(train_len).tolist(), 'prediction': final_prediction_tr.values.tolist(), 'mse': mse_ftr,
                                            'rmse': rmse_ftr, 'mape': mape_ftr, 'mae': mae_ftr},
                                'final': {'original': df['close'].tail(test_len).tolist(), 'prediction': final_prediction.values.tolist(), 'mse': mse,
                                          'rmse': rmse, 'mape': mape, 'mae': mae},
                                'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}

              # save simulation data here as checkpoint
              with open('simulation2_data.json', 'w') as fp:
                  json.dump(simulation2, fp)                 

              # Print the running summary (use a distinct loop variable so the
              # outer `ma` loop variable is not shadowed)
              for key in simulation2.keys():
                  print('\n' + key)
                  print('Prediction vs Close:\t\t' + str(round(100*simulation2[key]['accuracy']['prediction vs close'], 2))
                        + '% Accuracy')
                  print('Prediction vs Prediction:\t' + str(round(100*simulation2[key]['accuracy']['prediction vs prediction'], 2))
                        + '% Accuracy')
                  print('MSE:\t', simulation2[key]['final']['mse'],
                        '\nRMSE:\t', simulation2[key]['final']['rmse'],
                        '\nMAE:\t', simulation2[key]['final']['mae'])
            # else:
            #   break
    elapsed = timeit.default_timer() - start_time
    print('Runtime: mins:',elapsed/60)
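The two directional-accuracy loops above can also be written vectorised with NumPy. A sketch on toy arrays (the names mirror the notebook's, but the values are illustrative); note that exact-zero moves would count as hits under the sign comparison, whereas the strict `>`/`<` loop counts them as misses:

```python
import numpy as np

# Toy data: 5 closes and 5 aligned predictions (illustrative values)
actual = np.array([100.0, 101.0, 99.5, 100.2, 100.1])
final_prediction = np.array([100.5, 100.8, 99.9, 100.4, 100.6])

actual_dir = np.sign(np.diff(actual))  # direction of each actual close-to-close move

# Accuracy 1: does the predicted level sit on the same side of the previous close?
pred_vs_close = np.sign(final_prediction[1:] - actual[:-1])
accuracy_1 = np.mean(pred_vs_close == actual_dir)

# Accuracy 2: does the predicted move point the same way as the actual move?
pred_dir = np.sign(np.diff(final_prediction))
accuracy_2 = np.mean(pred_dir == actual_dir)

print(accuracy_1, accuracy_2)  # 0.75 0.75 on this toy data
```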
SMA
SMA([input_arrays], [timeperiod=30])

Simple Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
17

Working on SMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.63 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4157.020, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3687.148, Time=0.07 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.30 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3458.651, Time=0.10 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3322.133, Time=0.12 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=0.97 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=1.06 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3324.133, Time=0.26 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.547 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1657.067
Date:                Sun, 12 Dec 2021   AIC                           3322.133
Time:                        19:03:12   BIC                           3340.897
Sample:                             0   HQIC                          3329.339
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1966      0.003   -387.226      0.000      -1.203      -1.191
ar.L2         -0.8952      0.006   -138.692      0.000      -0.908      -0.883
ar.L3         -0.3968      0.006    -68.284      0.000      -0.408      -0.385
sigma2         3.5858      0.017    214.535      0.000       3.553       3.619
===================================================================================
Ljung-Box (L1) (Q):                  14.47   Jarque-Bera (JB):           2428881.42
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       271.99
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 
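The stepwise search above ranks candidate orders by AIC. The reported AIC/BIC can be reproduced from the summary's log-likelihood; a quick arithmetic check (k = 4 estimated parameters — ar.L1, ar.L2, ar.L3, sigma2 — and an effective sample size of 808 − d with d = 3, which is how the differenced series is counted):

```python
import math

log_lik = -1657.067   # Log Likelihood from the SARIMAX(3, 3, 0) summary
k = 4                 # ar.L1, ar.L2, ar.L3, sigma2
n_eff = 808 - 3       # observations remaining after third-order differencing

aic = 2 * k - 2 * log_lik
bic = k * math.log(n_eff) - 2 * log_lik
# Matches the summary's AIC 3322.133 / BIC 3340.897 up to rounding of log_lik
```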

Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.11243, saving model to LSTM2.h5
48/48 - 5s - loss: 0.1180 - val_loss: 0.1124 - lr: 0.0010 - 5s/epoch - 104ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.11243 to 0.00640, saving model to LSTM2.h5
48/48 - 0s - loss: 0.0341 - val_loss: 0.0064 - lr: 0.0010 - 377ms/epoch - 8ms/step
[Epochs 3-51 elided: val_loss never improved on 0.00640; ReduceLROnPlateau cut the learning rate to 1e-04 at epoch 7 and to the 1e-05 floor at epoch 12, with training loss levelling off around 8.4e-04]
Epoch 00052: early stopping
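The training log above shows three callback behaviours: checkpointing the best `val_loss` to `LSTM2.h5`, `ReduceLROnPlateau`, and early stopping. A hedged sketch of how such a callback stack is typically wired up in Keras — the `factor`, `patience`, and `min_lr` values here are inferred from the log (improvement last seen at epoch 2, LR cut ×0.1 after 5 stagnant epochs, stop after 50) and are not confirmed by the notebook's earlier cells:

```python
from tensorflow.keras.callbacks import (ModelCheckpoint, ReduceLROnPlateau,
                                        EarlyStopping)

callbacks = [
    # Keep only the weights with the best validation loss
    ModelCheckpoint('LSTM2.h5', monitor='val_loss', save_best_only=True, verbose=1),
    # Cut the learning rate by 10x after a plateau, down to a 1e-5 floor
    ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=5,
                      min_lr=1e-5, verbose=1),
    # Give up after a long stretch without improvement
    EarlyStopping(monitor='val_loss', patience=50, verbose=1),
]
# model.fit(X_train, y_train, epochs=500, validation_split=0.25,
#           callbacks=callbacks, verbose=2)
```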
SMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 53.7298089742155 
RMSE:	 7.330062003435954 
MAE:	 5.9334980672321835
EMA
EMA([input_arrays], [timeperiod=30])

Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
51

Working on EMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.57 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4231.556, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3761.238, Time=0.07 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.37 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3532.227, Time=0.10 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3394.496, Time=0.13 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.16 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.87 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3396.496, Time=0.26 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.576 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1693.248
Date:                Sun, 12 Dec 2021   AIC                           3394.496
Time:                        19:05:25   BIC                           3413.260
Sample:                             0   HQIC                          3401.702
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1982      0.003   -389.569      0.000      -1.204      -1.192
ar.L2         -0.8976      0.006   -139.811      0.000      -0.910      -0.885
ar.L3         -0.3984      0.006    -68.662      0.000      -0.410      -0.387
sigma2         3.9230      0.018    215.372      0.000       3.887       3.959
===================================================================================
Ljung-Box (L1) (Q):                  14.54   Jarque-Bera (JB):           2462173.05
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.82
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.26837, saving model to LSTM2.h5
16/16 - 5s - loss: 0.1757 - accuracy: 0.0000e+00 - val_loss: 0.2684 - val_accuracy: 0.0000e+00 - lr: 0.0010 - 5s/epoch - 307ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.26837 to 0.02531, saving model to LSTM2.h5
16/16 - 0s - loss: 0.1147 - accuracy: 0.0000e+00 - val_loss: 0.0253 - val_accuracy: 0.0037 - lr: 0.0010 - 166ms/epoch - 10ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.02531 to 0.01506, saving model to LSTM2.h5
16/16 - 0s - loss: 0.0203 - accuracy: 0.0000e+00 - val_loss: 0.0151 - val_accuracy: 0.0037 - lr: 0.0010 - 157ms/epoch - 10ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.01506
16/16 - 0s - loss: 0.0047 - accuracy: 0.0000e+00 - val_loss: 0.0268 - val_accuracy: 0.0037 - lr: 0.0010 - 183ms/epoch - 11ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.01506 to 0.00975, saving model to LSTM2.h5
16/16 - 0s - loss: 0.0050 - accuracy: 0.0000e+00 - val_loss: 0.0098 - val_accuracy: 0.0037 - lr: 0.0010 - 176ms/epoch - 11ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.00975 to 0.00680, saving model to LSTM2.h5
16/16 - 0s - loss: 0.0021 - accuracy: 0.0000e+00 - val_loss: 0.0068 - val_accuracy: 0.0037 - lr: 0.0010 - 227ms/epoch - 14ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.00680
16/16 - 0s - loss: 0.0015 - accuracy: 0.0000e+00 - val_loss: 0.0077 - val_accuracy: 0.0037 - lr: 0.0010 - 156ms/epoch - 10ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.00680
16/16 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0085 - val_accuracy: 0.0037 - lr: 0.0010 - 179ms/epoch - 11ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.00680
16/16 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0084 - val_accuracy: 0.0037 - lr: 0.0010 - 143ms/epoch - 9ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.00680
16/16 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0081 - val_accuracy: 0.0037 - lr: 0.0010 - 138ms/epoch - 9ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00011: val_loss did not improve from 0.00680
16/16 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0079 - val_accuracy: 0.0037 - lr: 0.0010 - 136ms/epoch - 9ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.00680
16/16 - 0s - loss: 9.7618e-04 - accuracy: 0.0000e+00 - val_loss: 0.0081 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 142ms/epoch - 9ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.00680
16/16 - 0s - loss: 9.7391e-04 - accuracy: 0.0000e+00 - val_loss: 0.0081 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 186ms/epoch - 12ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.00680
16/16 - 0s - loss: 9.7096e-04 - accuracy: 0.0000e+00 - val_loss: 0.0082 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 188ms/epoch - 12ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.00680
16/16 - 0s - loss: 9.6892e-04 - accuracy: 0.0000e+00 - val_loss: 0.0082 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 138ms/epoch - 9ms/step
Epoch 16/500

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00016: val_loss did not improve from 0.00680
16/16 - 0s - loss: 9.6702e-04 - accuracy: 0.0000e+00 - val_loss: 0.0083 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 142ms/epoch - 9ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.00680
16/16 - 0s - loss: 9.6383e-04 - accuracy: 0.0000e+00 - val_loss: 0.0083 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 144ms/epoch - 9ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.00680
16/16 - 0s - loss: 9.6363e-04 - accuracy: 0.0000e+00 - val_loss: 0.0083 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 189ms/epoch - 12ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.00680
16/16 - 0s - loss: 9.6342e-04 - accuracy: 0.0000e+00 - val_loss: 0.0083 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 153ms/epoch - 10ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.00680
16/16 - 0s - loss: 9.6322e-04 - accuracy: 0.0000e+00 - val_loss: 0.0083 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 139ms/epoch - 9ms/step
Epoch 21/500

Epoch 00021: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00021: val_loss did not improve from 0.00680
16/16 - 0s - loss: 9.6301e-04 - accuracy: 0.0000e+00 - val_loss: 0.0083 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 139ms/epoch - 9ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.00680
16/16 - 0s - loss: 9.6279e-04 - accuracy: 0.0000e+00 - val_loss: 0.0083 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 134ms/epoch - 8ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.00680
16/16 - 0s - loss: 9.6258e-04 - accuracy: 0.0000e+00 - val_loss: 0.0083 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 135ms/epoch - 8ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.00680
16/16 - 0s - loss: 9.6235e-04 - accuracy: 0.0000e+00 - val_loss: 0.0083 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 140ms/epoch - 9ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.00680
16/16 - 0s - loss: 9.6213e-04 - accuracy: 0.0000e+00 - val_loss: 0.0083 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 141ms/epoch - 9ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.00680
16/16 - 0s - loss: 9.6190e-04 - accuracy: 0.0000e+00 - val_loss: 0.0083 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 154ms/epoch - 10ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.00680
16/16 - 0s - loss: 9.6166e-04 - accuracy: 0.0000e+00 - val_loss: 0.0084 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 136ms/epoch - 9ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.00680
16/16 - 0s - loss: 9.6142e-04 - accuracy: 0.0000e+00 - val_loss: 0.0084 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 181ms/epoch - 11ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.00680
16/16 - 0s - loss: 9.6118e-04 - accuracy: 0.0000e+00 - val_loss: 0.0084 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 146ms/epoch - 9ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.00680
16/16 - 0s - loss: 9.6093e-04 - accuracy: 0.0000e+00 - val_loss: 0.0084 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 142ms/epoch - 9ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.00680
16/16 - 0s - loss: 9.6068e-04 - accuracy: 0.0000e+00 - val_loss: 0.0084 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 149ms/epoch - 9ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.00680
16/16 - 0s - loss: 9.6042e-04 - accuracy: 0.0000e+00 - val_loss: 0.0084 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 161ms/epoch - 10ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.00680
16/16 - 0s - loss: 9.6016e-04 - accuracy: 0.0000e+00 - val_loss: 0.0084 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 134ms/epoch - 8ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.00680
16/16 - 0s - loss: 9.5990e-04 - accuracy: 0.0000e+00 - val_loss: 0.0084 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 139ms/epoch - 9ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.00680
16/16 - 0s - loss: 9.5963e-04 - accuracy: 0.0000e+00 - val_loss: 0.0084 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 143ms/epoch - 9ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.00680
16/16 - 0s - loss: 9.5936e-04 - accuracy: 0.0000e+00 - val_loss: 0.0084 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 144ms/epoch - 9ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.00680
16/16 - 0s - loss: 9.5908e-04 - accuracy: 0.0000e+00 - val_loss: 0.0084 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 159ms/epoch - 10ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.00680
16/16 - 0s - loss: 9.5880e-04 - accuracy: 0.0000e+00 - val_loss: 0.0084 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 150ms/epoch - 9ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.00680
16/16 - 0s - loss: 9.5852e-04 - accuracy: 0.0000e+00 - val_loss: 0.0084 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 148ms/epoch - 9ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.00680
16/16 - 0s - loss: 9.5824e-04 - accuracy: 0.0000e+00 - val_loss: 0.0084 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 138ms/epoch - 9ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.00680
16/16 - 0s - loss: 9.5795e-04 - accuracy: 0.0000e+00 - val_loss: 0.0084 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 137ms/epoch - 9ms/step
[Epochs 42–56 truncated: val_loss held at ~0.0085 with lr=1e-05 and never improved on the best of 0.00680.]
Epoch 00056: early stopping
SMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 53.7298089742155 
RMSE:	 7.330062003435954 
MAPE:	 5.9334980672321835

EMA
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	45.15% Accuracy
MSE:	 55.817061136968896 
RMSE:	 7.4710816577634125 
MAPE:	 6.256840285842
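The MSE/RMSE/MAPE figures above follow their standard definitions; a sketch of how such a report could be computed, assuming "Prediction vs Close" accuracy means sign agreement between predicted and realised day-over-day moves (the notebook's exact definition lives in an earlier cell):

```python
import numpy as np

def regression_report(y_true, y_pred):
    """MSE, RMSE, MAPE (in %) and directional accuracy for a forecast.

    Directional accuracy here is a hypothetical reading of
    'Prediction vs Close': the fraction of steps where the predicted
    move has the same sign as the realised move.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mse = np.mean((y_true - y_pred) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100
    direction = np.mean(np.sign(np.diff(y_pred)) == np.sign(np.diff(y_true))) * 100
    return {"MSE": mse, "RMSE": rmse, "MAPE": mape, "Directional %": direction}
```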
WMA
WMA([input_arrays], [timeperiod=30])

Weighted Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
49
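TA-Lib's WMA weights the bars in the window linearly, with the newest bar weighted heaviest. A NumPy sketch of that weighting (not TA-Lib itself, which should be preferred in the notebook):

```python
import numpy as np

def wma(price, timeperiod=30):
    """Linearly-weighted moving average with weights 1..timeperiod,
    newest bar heaviest; the first timeperiod-1 outputs are NaN."""
    price = np.asarray(price, dtype=float)
    w = np.arange(1, timeperiod + 1, dtype=float)
    out = np.full(price.shape, np.nan)
    for i in range(timeperiod - 1, len(price)):
        window = price[i - timeperiod + 1 : i + 1]
        out[i] = np.dot(window, w) / w.sum()
    return out
```

For example, with `timeperiod=3` the last value of `wma([1, 2, 3], 3)` is `(1·1 + 2·2 + 3·3) / 6 = 14/6`.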

Working on WMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.57 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4264.089, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3793.930, Time=0.05 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.32 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3564.923, Time=0.10 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3427.258, Time=0.10 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.64 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.61 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3429.258, Time=0.24 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.699 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1709.629
Date:                Sun, 12 Dec 2021   AIC                           3427.258
Time:                        19:07:09   BIC                           3446.021
Sample:                             0   HQIC                          3434.464
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1981      0.003   -389.386      0.000      -1.204      -1.192
ar.L2         -0.8974      0.006   -139.699      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.737      0.000      -0.410      -0.387
sigma2         4.0860      0.019    215.311      0.000       4.049       4.123
===================================================================================
Ljung-Box (L1) (Q):                  14.57   Jarque-Bera (JB):           2460901.70
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.75
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 
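The stepwise search minimises the AIC, which can be sanity-checked directly from the SARIMAX table: AIC = 2k − 2·ln(L), where k = 4 is the number of estimated parameters here (three AR coefficients plus sigma2):

```python
# Reproduce the reported AIC from the reported log likelihood.
k = 4                       # ar.L1, ar.L2, ar.L3, sigma2
log_likelihood = -1709.629  # from the SARIMAX results table
aic = 2 * k - 2 * log_likelihood   # = 3427.258, matching the table
```

(The BIC uses the effective sample size after the d=3 differencing, which is why it is not simply k·ln(808) − 2·ln(L).)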

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.08323, saving model to LSTM2.h5
17/17 - 5s - loss: 0.1026 - accuracy: 0.0000e+00 - val_loss: 0.0832 - val_accuracy: 0.0037 - lr: 0.0010 - 5s/epoch - 308ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.08323 to 0.04614, saving model to LSTM2.h5
17/17 - 0s - loss: 0.0467 - accuracy: 0.0000e+00 - val_loss: 0.0461 - val_accuracy: 0.0037 - lr: 0.0010 - 174ms/epoch - 10ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.04614 to 0.00671, saving model to LSTM2.h5
17/17 - 0s - loss: 0.0104 - accuracy: 0.0000e+00 - val_loss: 0.0067 - val_accuracy: 0.0037 - lr: 0.0010 - 162ms/epoch - 10ms/step
[Epochs 4–53 truncated: val_loss never improved on 0.00671; ReduceLROnPlateau cut the learning rate to 1e-04 at epoch 8 and to 1e-05 at epoch 13, after which training loss crept down to ~8.37e-04 while val_loss drifted up to ~0.0140.]
Epoch 00053: early stopping

WMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	44.78% Accuracy
MSE:	 59.16871259968053 
RMSE:	 7.692120162847206 
MAPE:	 6.209776446209727
DEMA
DEMA([input_arrays], [timeperiod=30])

Double Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
89
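The double EMA removes more lag than a single EMA via DEMA = 2·EMA(price) − EMA(EMA(price)). A pandas sketch of that identity (TA-Lib seeds its EMA with an SMA, so values near the start of the series will differ from this version):

```python
import pandas as pd

def dema(price, timeperiod=30):
    """Double exponential moving average: 2*EMA - EMA(EMA).
    A pandas ewm-based sketch, not TA-Lib's implementation."""
    s = pd.Series(price, dtype=float)
    ema1 = s.ewm(span=timeperiod, adjust=False).mean()
    ema2 = ema1.ewm(span=timeperiod, adjust=False).mean()
    return 2 * ema1 - ema2
```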

Working on DEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.59 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4436.126, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3965.317, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.49 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3736.589, Time=0.11 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3598.951, Time=0.11 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.21 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=1.20 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3600.951, Time=0.23 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 4.051 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1795.475
Date:                Sun, 12 Dec 2021   AIC                           3598.951
Time:                        19:08:51   BIC                           3617.714
Sample:                             0   HQIC                          3606.157
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1983      0.003   -389.581      0.000      -1.204      -1.192
ar.L2         -0.8973      0.006   -139.732      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.649      0.000      -0.410      -0.387
sigma2         5.0573      0.023    215.292      0.000       5.011       5.103
===================================================================================
Ljung-Box (L1) (Q):                  14.41   Jarque-Bera (JB):           2460553.80
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.89
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.74
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.21773, saving model to LSTM2.h5
10/10 - 5s - loss: 0.2197 - accuracy: 0.0000e+00 - val_loss: 0.2177 - val_accuracy: 0.0000e+00 - lr: 0.0010 - 5s/epoch - 516ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.21773 to 0.01040, saving model to LSTM2.h5
10/10 - 0s - loss: 0.0982 - accuracy: 0.0000e+00 - val_loss: 0.0104 - val_accuracy: 0.0037 - lr: 0.0010 - 141ms/epoch - 14ms/step
[Epochs 3–52 truncated: val_loss never improved on 0.01040; the learning rate was cut to 1e-04 at epoch 7 and to 1e-05 at epoch 12, with val_loss settling around 0.031.]
Epoch 00052: early stopping

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 121.68296982011337 
RMSE:	 11.031000399787564 
MAPE:	 9.966678894256134
KAMA
KAMA([input_arrays], [timeperiod=30])

Kaufman Adaptive Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
18
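What makes KAMA adaptive is Kaufman's efficiency ratio: the net change over the window divided by the sum of absolute bar-to-bar moves, which is 1.0 for a perfectly trending series and near 0 for pure noise. A NumPy sketch of that ratio (the quantity KAMA maps onto its smoothing constant; not TA-Lib's full KAMA):

```python
import numpy as np

def efficiency_ratio(price, timeperiod=30):
    """Kaufman's efficiency ratio over the most recent window:
    |net change| / sum(|bar-to-bar changes|), in [0, 1]."""
    price = np.asarray(price, dtype=float)
    change = abs(price[-1] - price[-timeperiod - 1])
    volatility = np.abs(np.diff(price[-timeperiod - 1:])).sum()
    return change / volatility if volatility else 0.0
```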

Working on KAMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.46 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4190.464, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3724.371, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.37 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3494.154, Time=0.09 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3357.435, Time=0.13 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.51 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.95 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3359.435, Time=0.29 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.898 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1674.717
Date:                Sun, 12 Dec 2021   AIC                           3357.435
Time:                        19:10:27   BIC                           3376.198
Sample:                             0   HQIC                          3364.641
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1955      0.003   -381.246      0.000      -1.202      -1.189
ar.L2         -0.8964      0.007   -135.835      0.000      -0.909      -0.883
ar.L3         -0.3971      0.006    -67.229      0.000      -0.409      -0.385
sigma2         3.7466      0.018    211.623      0.000       3.712       3.781
===================================================================================
Ljung-Box (L1) (Q):                  14.20   Jarque-Bera (JB):           2338363.32
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.01   Skew:                             3.76
Prob(H) (two-sided):                  0.00   Kurtosis:                       266.93
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 
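The information criteria in the SARIMAX table above follow directly from the reported log-likelihood. As a sanity check, a minimal sketch (assuming k = 4 estimated parameters, i.e. ar.L1–ar.L3 plus sigma2, and an effective sample size of 808 − d = 805 after third-order differencing) reproduces the AIC/BIC/HQIC values up to rounding of the log-likelihood:

```python
import math

def info_criteria(loglik, k, n):
    """AIC, BIC and HQIC from a model's log-likelihood, number of
    estimated parameters k, and effective sample size n."""
    aic = 2 * k - 2 * loglik
    bic = k * math.log(n) - 2 * loglik
    hqic = 2 * k * math.log(math.log(n)) - 2 * loglik
    return aic, bic, hqic

# Values from the ARIMA(3,3,0) fit on the KAMA series:
# log-likelihood -1674.717, 4 parameters, 808 obs minus d = 3.
aic, bic, hqic = info_criteria(loglik=-1674.717, k=4, n=805)
print(round(aic, 3), round(bic, 3), round(hqic, 3))
# close to the tabled 3357.435 / 3376.198 / 3364.641
```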

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.01861, saving model to LSTM2.h5
45/45 - 5s - loss: 0.1464 - accuracy: 0.0000e+00 - val_loss: 0.0186 - val_accuracy: 0.0037 - lr: 0.0010 - 5s/epoch - 120ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.01861 to 0.00620, saving model to LSTM2.h5
45/45 - 0s - loss: 0.0217 - accuracy: 0.0000e+00 - val_loss: 0.0062 - val_accuracy: 0.0037 - lr: 0.0010 - 395ms/epoch - 9ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.00620 to 0.00583, saving model to LSTM2.h5
45/45 - 0s - loss: 0.0082 - accuracy: 0.0000e+00 - val_loss: 0.0058 - val_accuracy: 0.0037 - lr: 0.0010 - 376ms/epoch - 8ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.00583 to 0.00360, saving model to LSTM2.h5
45/45 - 0s - loss: 0.0104 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 0.0010 - 371ms/epoch - 8ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.00360
45/45 - 0s - loss: 0.0019 - accuracy: 0.0000e+00 - val_loss: 0.0076 - val_accuracy: 0.0037 - lr: 0.0010 - 344ms/epoch - 8ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.00360
45/45 - 0s - loss: 0.0038 - accuracy: 0.0000e+00 - val_loss: 0.0071 - val_accuracy: 0.0037 - lr: 0.0010 - 357ms/epoch - 8ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.00360
45/45 - 0s - loss: 0.0045 - accuracy: 0.0000e+00 - val_loss: 0.0105 - val_accuracy: 0.0037 - lr: 0.0010 - 355ms/epoch - 8ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.00360
45/45 - 0s - loss: 0.0096 - accuracy: 0.0000e+00 - val_loss: 0.0102 - val_accuracy: 0.0037 - lr: 0.0010 - 347ms/epoch - 8ms/step
Epoch 9/500

Epoch 00009: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00009: val_loss did not improve from 0.00360
45/45 - 0s - loss: 0.0180 - accuracy: 0.0000e+00 - val_loss: 0.0089 - val_accuracy: 0.0037 - lr: 0.0010 - 350ms/epoch - 8ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.00360
45/45 - 0s - loss: 0.0148 - accuracy: 0.0000e+00 - val_loss: 0.0060 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 341ms/epoch - 8ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.00360
45/45 - 0s - loss: 0.0024 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 350ms/epoch - 8ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.00360
45/45 - 0s - loss: 0.0017 - accuracy: 0.0000e+00 - val_loss: 0.0042 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 360ms/epoch - 8ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.00360
45/45 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0040 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 355ms/epoch - 8ms/step
Epoch 14/500

Epoch 00014: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00014: val_loss did not improve from 0.00360
45/45 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0040 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 369ms/epoch - 8ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.00360
45/45 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0040 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 364ms/epoch - 8ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.00360
45/45 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0040 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 347ms/epoch - 8ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.00360
45/45 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0040 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 334ms/epoch - 7ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.00360
45/45 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0040 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 353ms/epoch - 8ms/step
Epoch 19/500

Epoch 00019: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00019: val_loss did not improve from 0.00360
45/45 - 0s - loss: 9.9860e-04 - accuracy: 0.0000e+00 - val_loss: 0.0040 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 344ms/epoch - 8ms/step
Epochs 20-54: val_loss did not improve from 0.00360 (drifting from 0.0040 to 0.0055); loss fell from 9.9188e-04 to 8.6266e-04 at lr 1.0000e-05
Epoch 00054: early stopping
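The lr column in the log above reflects a ReduceLROnPlateau-style schedule: whenever val_loss fails to improve for a set number of epochs, the learning rate is multiplied by a factor (here 0.1) down to a floor of 1e-05, and EarlyStopping eventually halts the run. A standalone sketch of that plateau logic (the patience and factor values here are illustrative, not necessarily the notebook's exact callback settings):

```python
class PlateauScheduler:
    """Reduce lr by `factor` after `patience` epochs without val_loss
    improvement, never going below `min_lr` (mirrors the spirit of
    Keras's ReduceLROnPlateau callback)."""
    def __init__(self, lr=1e-3, factor=0.1, patience=4, min_lr=1e-5):
        self.lr, self.factor = lr, factor
        self.patience, self.min_lr = patience, min_lr
        self.best = float("inf")
        self.wait = 0

    def step(self, val_loss):
        if val_loss < self.best:          # new best: reset the counter
            self.best, self.wait = val_loss, 0
        else:
            self.wait += 1
            if self.wait >= self.patience:  # plateau: cut the lr
                self.lr = max(self.lr * self.factor, self.min_lr)
                self.wait = 0
        return self.lr

sched = PlateauScheduler()
# Three improving epochs, then a plateau long enough for two cuts:
losses = [0.0186, 0.0062, 0.0036] + [0.0040] * 8
lrs = [sched.step(l) for l in losses]
print(lrs[-1])  # lr has been cut from 1e-3 down to the 1e-5 floor
```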
SMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 53.7298089742155 
RMSE:	 7.330062003435954 
MAPE:	 5.9334980672321835

EMA
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	45.15% Accuracy
MSE:	 55.817061136968896 
RMSE:	 7.4710816577634125 
MAPE:	 6.256840285842

WMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	44.78% Accuracy
MSE:	 59.16871259968053 
RMSE:	 7.692120162847206 
MAPE:	 6.209776446209727

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 121.68296982011337 
RMSE:	 11.031000399787564 
MAPE:	 9.966678894256134

KAMA
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 58.818484868395416 
RMSE:	 7.669321017430123 
MAPE:	 6.375419548070835
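The per-indicator figures reported here appear to be directional accuracy (whether the predicted move has the same sign as the actual move) alongside MSE, RMSE and MAPE. A sketch of those metrics with NumPy on toy arrays (the notebook's exact comparison logic, e.g. what "Prediction vs Prediction" compares, is not shown here):

```python
import numpy as np

def directional_accuracy(actual, predicted):
    """Percentage of steps where the predicted and actual
    period-over-period moves share a sign."""
    same = np.sign(np.diff(actual)) == np.sign(np.diff(predicted))
    return 100 * same.mean()

def error_metrics(actual, predicted):
    """MSE, RMSE and MAPE (in percent) of a forecast."""
    err = actual - predicted
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mape = 100 * np.mean(np.abs(err / actual))
    return mse, rmse, mape

actual = np.array([100.0, 102.0, 101.0, 104.0])
pred = np.array([101.0, 103.0, 102.0, 101.0])
print(directional_accuracy(actual, pred))  # 2 of 3 moves called correctly
mse, rmse, mape = error_metrics(actual, pred)
```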
MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])

MidPoint over period (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 14
Outputs:
    real
14
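As TA-Lib's help text above describes, MIDPOINT returns the midpoint of the highest and lowest price inside each rolling window. A NumPy equivalent, assuming that standard definition and TA-Lib's usual NaN lookback of timeperiod − 1 bars:

```python
import numpy as np

def midpoint(price, timeperiod=14):
    """Rolling (max + min) / 2 over `timeperiod` bars; the first
    timeperiod - 1 outputs are NaN, matching TA-Lib's lookback."""
    price = np.asarray(price, dtype=float)
    out = np.full(price.shape, np.nan)
    for i in range(timeperiod - 1, len(price)):
        window = price[i - timeperiod + 1 : i + 1]
        out[i] = (window.max() + window.min()) / 2.0
    return out

# Midpoints 2.0, 3.0, 4.0 after a two-bar NaN lookback:
print(midpoint([1, 2, 3, 4, 5], timeperiod=3))
```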

Working on MIDPOINT predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.47 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4212.289, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3747.746, Time=0.07 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.35 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3523.401, Time=0.12 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3387.759, Time=0.13 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.57 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=1.12 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3389.758, Time=0.28 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 4.157 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1689.879
Date:                Sun, 12 Dec 2021   AIC                           3387.759
Time:                        19:12:41   BIC                           3406.522
Sample:                             0   HQIC                          3394.964
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1878      0.003   -345.315      0.000      -1.195      -1.181
ar.L2         -0.8876      0.007   -121.809      0.000      -0.902      -0.873
ar.L3         -0.3957      0.007    -60.127      0.000      -0.409      -0.383
sigma2         3.8904      0.020    193.404      0.000       3.851       3.930
===================================================================================
Ljung-Box (L1) (Q):                  13.21   Jarque-Bera (JB):           1659080.01
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.08   Skew:                             3.28
Prob(H) (two-sided):                  0.00   Kurtosis:                       225.31
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.01020, saving model to LSTM2.h5
58/58 - 5s - loss: 0.1443 - accuracy: 0.0000e+00 - val_loss: 0.0102 - val_accuracy: 0.0037 - lr: 0.0010 - 5s/epoch - 88ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.01020
58/58 - 0s - loss: 0.0551 - accuracy: 0.0000e+00 - val_loss: 0.0161 - val_accuracy: 0.0037 - lr: 0.0010 - 448ms/epoch - 8ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.01020
58/58 - 0s - loss: 0.0087 - accuracy: 0.0000e+00 - val_loss: 0.0358 - val_accuracy: 0.0037 - lr: 0.0010 - 443ms/epoch - 8ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.01020 to 0.00561, saving model to LSTM2.h5
58/58 - 0s - loss: 0.0088 - accuracy: 0.0000e+00 - val_loss: 0.0056 - val_accuracy: 0.0037 - lr: 0.0010 - 464ms/epoch - 8ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.00561
58/58 - 0s - loss: 0.0023 - accuracy: 0.0000e+00 - val_loss: 0.0109 - val_accuracy: 0.0037 - lr: 0.0010 - 437ms/epoch - 8ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.00561
58/58 - 0s - loss: 0.0034 - accuracy: 0.0000e+00 - val_loss: 0.0081 - val_accuracy: 0.0037 - lr: 0.0010 - 446ms/epoch - 8ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.00561
58/58 - 0s - loss: 0.0031 - accuracy: 0.0000e+00 - val_loss: 0.0096 - val_accuracy: 0.0037 - lr: 0.0010 - 442ms/epoch - 8ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.00561
58/58 - 0s - loss: 0.0039 - accuracy: 0.0000e+00 - val_loss: 0.0099 - val_accuracy: 0.0037 - lr: 0.0010 - 451ms/epoch - 8ms/step
Epoch 9/500

Epoch 00009: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00009: val_loss did not improve from 0.00561
58/58 - 0s - loss: 0.0056 - accuracy: 0.0000e+00 - val_loss: 0.0126 - val_accuracy: 0.0037 - lr: 0.0010 - 433ms/epoch - 7ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.00561
58/58 - 0s - loss: 0.0122 - accuracy: 0.0000e+00 - val_loss: 0.0069 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 429ms/epoch - 7ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.00561
58/58 - 0s - loss: 0.0015 - accuracy: 0.0000e+00 - val_loss: 0.0056 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 436ms/epoch - 8ms/step
Epoch 12/500

Epoch 00012: val_loss improved from 0.00561 to 0.00537, saving model to LSTM2.h5
58/58 - 0s - loss: 0.0015 - accuracy: 0.0000e+00 - val_loss: 0.0054 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 456ms/epoch - 8ms/step
Epoch 13/500

Epoch 00013: val_loss improved from 0.00537 to 0.00499, saving model to LSTM2.h5
58/58 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0050 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 467ms/epoch - 8ms/step
Epoch 14/500

Epoch 00014: val_loss improved from 0.00499 to 0.00464, saving model to LSTM2.h5
58/58 - 1s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0046 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 523ms/epoch - 9ms/step
Epoch 15/500

Epoch 00015: val_loss improved from 0.00464 to 0.00434, saving model to LSTM2.h5
58/58 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0043 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 473ms/epoch - 8ms/step
Epoch 16/500

Epoch 00016: val_loss improved from 0.00434 to 0.00411, saving model to LSTM2.h5
58/58 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0041 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 448ms/epoch - 8ms/step
Epoch 17/500

Epoch 00017: val_loss improved from 0.00411 to 0.00393, saving model to LSTM2.h5
58/58 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0039 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 460ms/epoch - 8ms/step
Epoch 18/500

Epoch 00018: val_loss improved from 0.00393 to 0.00380, saving model to LSTM2.h5
58/58 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0038 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 443ms/epoch - 8ms/step
Epoch 19/500

Epoch 00019: val_loss improved from 0.00380 to 0.00370, saving model to LSTM2.h5
58/58 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0037 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 474ms/epoch - 8ms/step
Epoch 20/500

Epoch 00020: val_loss improved from 0.00370 to 0.00364, saving model to LSTM2.h5
58/58 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 451ms/epoch - 8ms/step
Epoch 21/500

Epoch 00021: val_loss improved from 0.00364 to 0.00361, saving model to LSTM2.h5
58/58 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 456ms/epoch - 8ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.00361
58/58 - 1s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 510ms/epoch - 9ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.00361
58/58 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 439ms/epoch - 8ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.00361
58/58 - 0s - loss: 9.9791e-04 - accuracy: 0.0000e+00 - val_loss: 0.0037 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 432ms/epoch - 7ms/step
Epoch 25/500

Epoch 00025: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00025: val_loss did not improve from 0.00361
58/58 - 1s - loss: 9.7807e-04 - accuracy: 0.0000e+00 - val_loss: 0.0038 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 510ms/epoch - 9ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.00361
58/58 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0037 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 434ms/epoch - 7ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.00361
58/58 - 0s - loss: 9.0638e-04 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 431ms/epoch - 7ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.00361
58/58 - 0s - loss: 8.6817e-04 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 433ms/epoch - 7ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.00361
58/58 - 0s - loss: 8.5861e-04 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 422ms/epoch - 7ms/step
Epoch 30/500

Epoch 00030: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00030: val_loss did not improve from 0.00361
58/58 - 0s - loss: 8.5492e-04 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 445ms/epoch - 8ms/step
Epochs 31-71: val_loss did not improve from 0.00361 (drifting from 0.0036 to 0.0047); loss fell from 8.5264e-04 to 7.7102e-04 at lr 1.0000e-05
Epoch 00071: early stopping
MIDPOINT
Prediction vs Close:		51.12% Accuracy
Prediction vs Prediction:	44.4% Accuracy
MSE:	 65.76580507888394 
RMSE:	 8.109611894467204 
MAPE:	 6.6791438644890375
T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])

Triple Exponential Moving Average (T3) (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 5
    vfactor: 0.7
Outputs:
    real
19
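T3 chains six EMAs and combines the last four with weights derived from the volume factor. A compact NumPy sketch of the usual published Tillson T3 formula (TA-Lib's exact seeding and lookback handling may differ slightly):

```python
import numpy as np

def ema(x, period):
    """Recursive EMA with alpha = 2 / (period + 1)."""
    alpha = 2.0 / (period + 1.0)
    out = np.empty_like(x)
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1 - alpha) * out[i - 1]
    return out

def t3(price, timeperiod=5, vfactor=0.7):
    """Tillson T3: six chained EMAs combined with vfactor weights."""
    x = np.asarray(price, dtype=float)
    e3 = ema(ema(ema(x, timeperiod), timeperiod), timeperiod)
    e4 = ema(e3, timeperiod)
    e5 = ema(e4, timeperiod)
    e6 = ema(e5, timeperiod)
    a = vfactor
    c1 = -a ** 3
    c2 = 3 * a ** 2 + 3 * a ** 3
    c3 = -6 * a ** 2 - 3 * a - 3 * a ** 3
    c4 = 1 + 3 * a + a ** 3 + 3 * a ** 2
    return c1 * e6 + c2 * e5 + c3 * e4 + c4 * e3

# The four weights sum to 1, so a constant series maps to itself
# (up to float rounding):
print(t3(np.full(30, 50.0))[-1])
```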

Working on T3 predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.47 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4414.515, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3944.062, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.48 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3715.173, Time=0.08 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3577.471, Time=0.10 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.82 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.76 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3579.471, Time=0.25 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 4.070 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1784.736
Date:                Sun, 12 Dec 2021   AIC                           3577.471
Time:                        19:14:55   BIC                           3596.235
Sample:                             0   HQIC                          3584.677
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1982      0.003   -389.844      0.000      -1.204      -1.192
ar.L2         -0.8974      0.006   -139.861      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.862      0.000      -0.410      -0.387
sigma2         4.9242      0.023    215.469      0.000       4.879       4.969
===================================================================================
Ljung-Box (L1) (Q):                  14.55   Jarque-Bera (JB):           2468024.38
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       274.15
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 
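The AIC reported in the table follows directly from the log-likelihood. A quick check, taking k = 4 estimated parameters (the three AR coefficients plus sigma2):

```python
# AIC = 2k - 2*lnL, with k = 4 (ar.L1, ar.L2, ar.L3, sigma2)
k = 4
loglik = -1784.736        # Log Likelihood from the SARIMAX table above
aic = 2 * k - 2 * loglik  # 3577.472, matching the reported 3577.471
```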

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.19217, saving model to LSTM2.h5
43/43 - 5s - loss: 0.1195 - accuracy: 0.0000e+00 - val_loss: 0.1922 - val_accuracy: 0.0000e+00 - lr: 0.0010 - 5s/epoch - 116ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.19217 to 0.00610, saving model to LSTM2.h5
43/43 - 0s - loss: 0.0489 - accuracy: 0.0000e+00 - val_loss: 0.0061 - val_accuracy: 0.0037 - lr: 0.0010 - 388ms/epoch - 9ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.00610
43/43 - 0s - loss: 0.0109 - accuracy: 0.0000e+00 - val_loss: 0.1014 - val_accuracy: 0.0000e+00 - lr: 0.0010 - 334ms/epoch - 8ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.00610
43/43 - 0s - loss: 0.0126 - accuracy: 0.0000e+00 - val_loss: 0.0126 - val_accuracy: 0.0037 - lr: 0.0010 - 340ms/epoch - 8ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.00610
43/43 - 0s - loss: 0.0032 - accuracy: 0.0000e+00 - val_loss: 0.0625 - val_accuracy: 0.0037 - lr: 0.0010 - 336ms/epoch - 8ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.00610
43/43 - 0s - loss: 0.0029 - accuracy: 0.0000e+00 - val_loss: 0.0249 - val_accuracy: 0.0037 - lr: 0.0010 - 338ms/epoch - 8ms/step
Epoch 7/500

Epoch 00007: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00007: val_loss did not improve from 0.00610
43/43 - 0s - loss: 0.0019 - accuracy: 0.0000e+00 - val_loss: 0.0441 - val_accuracy: 0.0037 - lr: 0.0010 - 341ms/epoch - 8ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.00610
43/43 - 0s - loss: 0.0039 - accuracy: 0.0000e+00 - val_loss: 0.0247 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 337ms/epoch - 8ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.00610
43/43 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0256 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 344ms/epoch - 8ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.00610
43/43 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0238 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 340ms/epoch - 8ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.00610
43/43 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0228 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 334ms/epoch - 8ms/step
Epoch 12/500

Epoch 00012: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00012: val_loss did not improve from 0.00610
43/43 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0219 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 333ms/epoch - 8ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.00610
43/43 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0216 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 332ms/epoch - 8ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.00610
43/43 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0213 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 339ms/epoch - 8ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.00610
43/43 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0211 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 336ms/epoch - 8ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.00610
43/43 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0210 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 344ms/epoch - 8ms/step
Epoch 17/500

Epoch 00017: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00017: val_loss did not improve from 0.00610
43/43 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0209 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 352ms/epoch - 8ms/step
Epochs 00018–00051: val_loss did not improve from 0.00610 (loss easing from 0.0010 to 9.5157e-04, val_loss drifting from 0.0208 to 0.0193, lr 1.0000e-05; per-epoch lines elided)
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.00610
43/43 - 0s - loss: 9.4878e-04 - accuracy: 0.0000e+00 - val_loss: 0.0192 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 329ms/epoch - 8ms/step
Epoch 00052: early stopping
T3
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	47.39% Accuracy
MSE:	 100.97165975835705 
RMSE:	 10.048465542477471 
MAPE:	 8.008063201476219
TEMA
TEMA([input_arrays], [timeperiod=30])

Triple Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
9
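The TEMA help text above is TA-Lib's. The construction itself is simple enough to sketch in pandas: TEMA combines three cascaded EMAs so that most of the single EMA's lag cancels. As with the T3 sketch, EMA seeding differs from TA-Lib's, so values near the start of the series will not match `talib.TEMA` exactly.

```python
import numpy as np
import pandas as pd

def tema(price: pd.Series, timeperiod: int = 30) -> pd.Series:
    """Triple Exponential Moving Average: TEMA = 3*EMA1 - 3*EMA2 + EMA3."""
    ema1 = price.ewm(span=timeperiod, adjust=False).mean()
    ema2 = ema1.ewm(span=timeperiod, adjust=False).mean()
    ema3 = ema2.ewm(span=timeperiod, adjust=False).mean()
    return 3 * ema1 - 3 * ema2 + ema3
```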

Working on TEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.67 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4352.703, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3889.412, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.35 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3689.930, Time=0.07 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3574.245, Time=0.16 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.51 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=1.02 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3576.245, Time=0.24 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 4.135 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1783.123
Date:                Sun, 12 Dec 2021   AIC                           3574.245
Time:                        19:16:42   BIC                           3593.008
Sample:                             0   HQIC                          3581.451
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1480      0.004   -302.430      0.000      -1.155      -1.141
ar.L2         -0.8300      0.008    -99.682      0.000      -0.846      -0.814
ar.L3         -0.3687      0.007    -50.527      0.000      -0.383      -0.354
sigma2         4.9055      0.028    175.970      0.000       4.851       4.960
===================================================================================
Ljung-Box (L1) (Q):                  11.61   Jarque-Bera (JB):           1261976.58
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.16   Skew:                             2.52
Prob(H) (two-sided):                  0.00   Kurtosis:                       196.90
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.03897, saving model to LSTM2.h5
90/90 - 6s - loss: 0.0870 - accuracy: 0.0000e+00 - val_loss: 0.0390 - val_accuracy: 0.0037 - lr: 0.0010 - 6s/epoch - 66ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.03897
90/90 - 1s - loss: 0.0496 - accuracy: 0.0000e+00 - val_loss: 0.0700 - val_accuracy: 0.0037 - lr: 0.0010 - 669ms/epoch - 7ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.03897
90/90 - 1s - loss: 0.0402 - accuracy: 0.0000e+00 - val_loss: 0.0603 - val_accuracy: 0.0037 - lr: 0.0010 - 711ms/epoch - 8ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.03897
90/90 - 1s - loss: 0.0337 - accuracy: 0.0000e+00 - val_loss: 0.0429 - val_accuracy: 0.0037 - lr: 0.0010 - 688ms/epoch - 8ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.03897 to 0.03103, saving model to LSTM2.h5
90/90 - 1s - loss: 0.0260 - accuracy: 0.0000e+00 - val_loss: 0.0310 - val_accuracy: 0.0037 - lr: 0.0010 - 681ms/epoch - 8ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.03103 to 0.02503, saving model to LSTM2.h5
90/90 - 1s - loss: 0.0212 - accuracy: 0.0000e+00 - val_loss: 0.0250 - val_accuracy: 0.0037 - lr: 0.0010 - 693ms/epoch - 8ms/step
Epoch 7/500

Epoch 00007: val_loss improved from 0.02503 to 0.02024, saving model to LSTM2.h5
90/90 - 1s - loss: 0.0152 - accuracy: 0.0000e+00 - val_loss: 0.0202 - val_accuracy: 0.0037 - lr: 0.0010 - 657ms/epoch - 7ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.02024
90/90 - 1s - loss: 0.0153 - accuracy: 0.0000e+00 - val_loss: 0.0233 - val_accuracy: 0.0037 - lr: 0.0010 - 649ms/epoch - 7ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.02024
90/90 - 1s - loss: 0.0130 - accuracy: 0.0000e+00 - val_loss: 0.0227 - val_accuracy: 0.0037 - lr: 0.0010 - 673ms/epoch - 7ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.02024
90/90 - 1s - loss: 0.0138 - accuracy: 0.0000e+00 - val_loss: 0.0248 - val_accuracy: 0.0037 - lr: 0.0010 - 649ms/epoch - 7ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.02024
90/90 - 1s - loss: 0.0123 - accuracy: 0.0000e+00 - val_loss: 0.0261 - val_accuracy: 0.0037 - lr: 0.0010 - 654ms/epoch - 7ms/step
Epoch 12/500

Epoch 00012: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00012: val_loss did not improve from 0.02024
90/90 - 1s - loss: 0.0136 - accuracy: 0.0000e+00 - val_loss: 0.0275 - val_accuracy: 0.0037 - lr: 0.0010 - 646ms/epoch - 7ms/step
Epoch 13/500

Epoch 00013: val_loss improved from 0.02024 to 0.00621, saving model to LSTM2.h5
90/90 - 1s - loss: 0.0189 - accuracy: 0.0000e+00 - val_loss: 0.0062 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 667ms/epoch - 7ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.00621
90/90 - 1s - loss: 0.0044 - accuracy: 0.0000e+00 - val_loss: 0.0086 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 652ms/epoch - 7ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.00621
90/90 - 1s - loss: 0.0032 - accuracy: 0.0000e+00 - val_loss: 0.0116 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 697ms/epoch - 8ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.00621
90/90 - 1s - loss: 0.0024 - accuracy: 0.0000e+00 - val_loss: 0.0148 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 703ms/epoch - 8ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.00621
90/90 - 1s - loss: 0.0018 - accuracy: 0.0000e+00 - val_loss: 0.0181 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 656ms/epoch - 7ms/step
Epoch 18/500

Epoch 00018: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00018: val_loss did not improve from 0.00621
90/90 - 1s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0212 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 656ms/epoch - 7ms/step
Epochs 00019–00062: val_loss did not improve from 0.00621 (loss easing from 0.0011 to 7.8497e-04, val_loss creeping from 0.0213 to 0.0377, lr 1.0000e-05; per-epoch lines elided)
Epoch 63/500

Epoch 00063: val_loss did not improve from 0.00621
90/90 - 1s - loss: 7.8280e-04 - accuracy: 0.0000e+00 - val_loss: 0.0379 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 662ms/epoch - 7ms/step
Epoch 00063: early stopping
TEMA
Prediction vs Close:		50.75% Accuracy
Prediction vs Prediction:	49.63% Accuracy
MSE:	 52.954444549979364 
RMSE:	 7.2769804555172035 
MAPE:	 6.363917319886005
Runtime: mins: 16.28943213206667
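The metric blocks printed after each run can be reproduced from the prediction and close series. The helper below is a hedged sketch, not the notebook's own function: the exact definition of "Prediction vs Close" accuracy in the notebook is not shown here, so directional accuracy of predicted moves against actual close-to-close moves is assumed.

```python
import numpy as np

def report_metrics(pred: np.ndarray, close: np.ndarray) -> dict:
    """Error metrics and (assumed) directional accuracy, as printed above."""
    err = pred - close
    mse = float(np.mean(err ** 2))
    rmse = float(np.sqrt(mse))
    mape = float(np.mean(np.abs(err / close)) * 100)
    # Directional accuracy: sign of predicted move vs sign of actual move.
    pred_dir = np.sign(np.diff(pred))
    true_dir = np.sign(np.diff(close))
    acc = float(np.mean(pred_dir == true_dir) * 100)
    return {"MSE": mse, "RMSE": rmse, "MAPE": mape, "DirAcc%": acc}
```

A perfect prediction gives zero error and 100% directional accuracy, which is a convenient sanity check.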

Architecture Used

In [ ]:
from google.colab import files
import cv2
uploaded = files.upload()
Saving Experiment2.png to Experiment2 (2).png
In [ ]:
import matplotlib.pyplot as plt  # plt was not imported in an earlier cell

imgfile = 'Experiment2.png'  # the architecture diagram uploaded above
img = cv2.imread(imgfile)
plt.figure(figsize=(20,10))
plt.axis("off")
plt.title('LSTM Architecture '+imgfile,fontsize=18)
plt.imshow(img)
Out[ ]:
<matplotlib.image.AxesImage at 0x7f4cbfedfad0>

Model Plots

In [159]:
with open('simulation2_data.json') as json_file:
    simulation2 = json.load(json_file)
fileimg = 'Experiment2'
In [160]:
for i in range(len(list(simulation2.keys()))):
  SIM = list(simulation2.keys())[i]
  plot_train(simulation2,SIM)
  plot_test(simulation2,SIM)
----- LSTM train/test metrics (RMSE, MSE, MAE) by MA type -----

MA         Train RMSE  Train MSE  Train MAE  Test RMSE  Test MSE   Test MAE
SMA            8.8838    78.9223     7.7595     7.3301    53.7298     5.9335
EMA           10.1788   103.6070     8.9990     7.4711    55.8171     6.2568
WMA           10.4888   110.0140     9.3402     7.6921    59.1687     6.2098
DEMA          12.1168   146.8172    10.8909    11.0310   121.6830     9.9667
KAMA          10.5671   111.6629     9.5051     7.6693    58.8185     6.3754
MIDPOINT       9.4368    89.0541     8.3800     8.1096    65.7658     6.6791
T3            12.0186   144.4476    10.8030    10.0485   100.9717     8.0081
TEMA           7.4420    55.3834     5.1576     7.2770    52.9544     6.3639

Univariate ARIMA Multistep Multivariate LSTM Hybrid Model Experiment 3

In [ ]:
def get_lstm(data,original_data, train_len, test_len,img_file,ma ,lstm_len=3):
    # prepare train and test data
    X_value = pd.DataFrame(data.iloc[:, :])
    y_value = pd.DataFrame(data.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    # Window the data and check shapes
    X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)  # X has shape (samples, n_steps_in, features), e.g. 224 x 3 x 21 (each 3 x 21 slice is 3 days of data); yc holds the corresponding closing prices
    X_train, X_test = split_train_test(X)
    y_train, y_test = split_train_test(y)
    index_train, index_test = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    det = 20  # constant offset subtracted from the inverse-transformed test predictions below
    input_dim = X_train.shape[1]    # n_steps_in, e.g. 3
    feature_size = X_train.shape[2] # number of features
    output_dim = y_train.shape[1]   # n_steps_out



    # # Option 1
    # # Set up & fit LSTM RNN
    # model = Sequential()
    # model.add(LSTM(256, activation='relu', kernel_initializer='he_normal', input_shape=(input_dim, feature_size)))
    # model.add(Dense(units=64,activation='relu'))
    # model.add(Dropout(0.5))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(learning_rate = 0.001), loss='mse')

    # ## Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()


    # # option 2
    # model = Sequential()
    # model.add(Bidirectional(LSTM(units= 128), input_shape=(input_dim, feature_size)))
    # model.add(Dense(64))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(learning_rate = 0.001), loss='mean_squared_error', metrics=['accuracy'])
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()




    # Option 3
    # Define a custom double-tanh activation (tanh scaled to the range [-2, 2])
    class Double_Tanh(Activation):
        def __init__(self, activation, **kwargs):
            super(Double_Tanh, self).__init__(activation, **kwargs)
            self.__name__ = 'double_tanh'

    def double_tanh(x):
        return (K.tanh(x) * 2)

    get_custom_objects().update({'double_tanh':Double_Tanh(double_tanh)})
    # Model Generation
    model = Sequential()
    #check https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
    model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2, kernel_regularizer=l1_l2(0.00,0.00), bias_regularizer=l1_l2(0.00,0.00)))
    model.add(Dense(1))
    model.add(Activation(double_tanh))
    model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
    # Common code
    callbacks = [
    EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    ModelCheckpoint('LSTM3.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    fname1 = img_file+'.png'
    tensorflow.keras.utils.plot_model(
        model, to_file=fname1, show_shapes=True, show_dtype=False,
        show_layer_names=True, expand_nested=False, dpi=96,
        layer_range=None, show_layer_activations=False
    )
    history = model.fit(X_train, y_train, epochs=500, batch_size=int( optimized_period[ma]), verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # plot loss
    fname2 = img_file+'-'+ma
    plt.title(img_file+'-'+ma+' Loss')
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    pyplot.plot(history.history['loss'], label='train')
    pyplot.plot(history.history['val_loss'], label='validation')
    pyplot.legend()
    pyplot.savefig(fname2+'.png',dpi='figure')
    pyplot.show()

    # Option 4
    # Set up & fit LSTM RNN
    # model = Sequential()
    # model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(x_train.shape[1], 1)))
    # model.add(LSTM(units=int(lstm_len/2)))
    # model.add(Dense(1, activation='sigmoid'))
    # model.compile(loss='mean_squared_error', optimizer='adam')
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()



    # Generate predictions
    predictiontr = model.predict(X_train, verbose=0)
    predictiontr = y_scaler.inverse_transform(predictiontr).tolist()
    outputtr = []
    for i in range(len(predictiontr)):
        outputtr.extend(predictiontr[i])
    predictiontr = outputtr
    # Generate error data (compare in the original, inverse-transformed scale)
    ## TODO: replace with yc, X_test generated by the new multistep method
    Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()
    mse_tr = mean_squared_error(Original_tr, predictiontr)
    rmse_tr = mse_tr ** 0.5
    mae_tr = mean_absolute_error(Original_tr, pd.Series(predictiontr))


    predictionte = model.predict(X_test, verbose=0)
    predictionte = (y_scaler.inverse_transform(predictionte)-det).tolist()
    outputte = []
    for i in range(len(predictionte)):
        outputte.extend(predictionte[i])
    predictionte = outputte
    # Generate error data (compare in the original, inverse-transformed scale)
    Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()
    mse_te = mean_squared_error(Original_te, predictionte)
    rmse_te = mse_te ** 0.5
    mae_te = mean_absolute_error(Original_te, pd.Series(predictionte))

    return Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,Original_te,predictionte, mse_te,rmse_te,mae_te
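get_X_y, split_train_test, and predict_index are helpers defined earlier in the notebook. For reference, the windowing step that get_X_y performs can be sketched as a plain sliding window (a hypothetical reimplementation assuming a 3-step input window and a 1-step target, not the notebook's exact code):

```python
import numpy as np

def make_windows(X_scaled, y_scaled, n_steps_in=3, n_steps_out=1):
    """Slide an n_steps_in window over the features; the target is the
    value(s) immediately after each window (hypothetical sketch)."""
    X, y = [], []
    for i in range(len(X_scaled) - n_steps_in - n_steps_out + 1):
        X.append(X_scaled[i:i + n_steps_in])
        y.append(y_scaled[i + n_steps_in:i + n_steps_in + n_steps_out, 0])
    return np.array(X), np.array(y)

X_demo = np.arange(40, dtype=float).reshape(10, 4)   # 10 days x 4 features
y_demo = np.arange(10, dtype=float).reshape(10, 1)
X_w, y_w = make_windows(X_demo, y_demo)
# X_w.shape == (7, 3, 4): each sample is 3 days of 4 features;
# y_w.shape == (7, 1): the scaled target for the following day.
```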
In [ ]:
if __name__ == '__main__':
    start_time = timeit.default_timer()
    simulation3 = {}
    imgfile = 'Experiment3'
    for ma in optimized_period:
              print(ma)
              print(functions[ma])
              print ( int( optimized_period[ma]))
              low_vol = df.apply(lambda c:  functions[ma](c, timeperiod = int( optimized_period[ma])))
              low_vol = low_vol.fillna(0)
              low_vol_data = df['close']
              high_vol = pd.DataFrame()
              df2 = df.copy()
              for i in df2.columns:
                if i in low_vol.columns:
                  high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
              high_vol_data = df['close']
              ## *****************************************************
              # Generate ARIMA and LSTM predictions
              print('\nWorking on ' + ma + ' predictions')
              try:
                print('parameters used : ', train_len, test_len)
                low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse,low_vol_mae = get_arima(low_vol,low_vol_data, train_len, test_len)
              except:
                  print('ARIMA error, skipping to next MA type')
                  continue
              Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,high_vol_Original, high_vol_prediction, high_vol_mse, high_vol_rmse,high_vol_mae, = get_lstm(high_vol,high_vol_data, train_len, test_len,imgfile,ma)
              final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr) # ignoring first 3 steps 
              mse_ftr = mean_squared_error(df['close'].head(train_len).values,final_prediction_tr.values)
              rmse_ftr = mse_ftr ** 0.5
              mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
              mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)

              final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)
              mse = mean_squared_error(df['close'].tail(test_len).values,final_prediction.values)
              rmse = mse ** 0.5
              mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
              mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
              # Generate prediction accuracy
              actual = df['close'].tail(test_len).values
              result_1 = []
              result_2 = []
              for i in range(1, len(final_prediction)):
                  # Compare prediction to previous close price
                  if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
                      result_1.append(1)
                  elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
                      result_1.append(1)
                  else:
                      result_1.append(0)

                  # Compare prediction to previous prediction
                  if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
                      result_2.append(1)
                  elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
                      result_2.append(1)
                  else:
                      result_2.append(0)

              accuracy_1 = np.mean(result_1)
              accuracy_2 = np.mean(result_2)

              simulation3[ma] = {'low_vol': {'original':list(low_vol_Original), 'prediction': list(low_vol_prediction) , 'mse': low_vol_mse,
                                            'rmse': low_vol_rmse, 'mae' : low_vol_mae},
                                'high_vol': {'original':list(high_vol_Original),'prediction': list(high_vol_prediction), 'mse': high_vol_mse,
                                            'rmse': high_vol_rmse, 'mae' : high_vol_mae},
                                'final_tr': {'original':df['close'].head(train_len).tolist(),'prediction': final_prediction_tr.values.tolist(), 'mse': mse_ftr,
                                            'rmse': rmse_ftr, 'mae' : mae_ftr},
                                'final': {'original': df['close'].tail(test_len).tolist(), 'prediction': final_prediction.values.tolist(), 'mse': mse,
                                          'rmse': rmse, 'mae': mae },
                                'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}

              # save simulation data here as checkpoint
              with open('simulation3_data.json', 'w') as fp:
                  json.dump(simulation3, fp)

              for key in simulation3.keys():
                  print('\n' + key)
                  print('Prediction vs Close:\t\t' + str(round(100*simulation3[key]['accuracy']['prediction vs close'], 2))
                        + '% Accuracy')
                  print('Prediction vs Prediction:\t' + str(round(100*simulation3[key]['accuracy']['prediction vs prediction'], 2))
                        + '% Accuracy')
                  print('MSE:\t', simulation3[key]['final']['mse'],
                        '\nRMSE:\t', simulation3[key]['final']['rmse'],
                        '\nMAE:\t', simulation3[key]['final']['mae'])  # the stored metric is MAE; a true MAPE is not saved
    elapsed = timeit.default_timer() - start_time
    print('Runtime: mins:',elapsed/60)
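The loop above decomposes each series into a low-volatility component (the moving average, forecast with ARIMA) and a high-volatility residual (close minus MA, forecast with the LSTM), then sums the two forecasts into final_prediction. A self-contained pandas sketch of this decomposition and recombination, using a plain rolling mean as a stand-in for the TA-Lib MA functions and tuned periods:

```python
import numpy as np
import pandas as pd

# Toy close series; in the notebook, df holds the OHLCV columns.
close = pd.Series(np.sin(np.linspace(0, 6, 60)) * 10 + 100)

# Decompose: low-vol = moving average, high-vol = residual around it.
low_vol = close.rolling(window=5).mean().fillna(0)
high_vol = close.subtract(low_vol, fill_value=0)

# The decomposition is exact, so summing component forecasts is well-posed:
#   final_prediction = arima_pred(low_vol) + lstm_pred(high_vol)
reconstructed = low_vol + high_vol
```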
SMA
SMA([input_arrays], [timeperiod=30])

Simple Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
17

Working on SMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.67 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4157.020, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3687.148, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.31 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3458.651, Time=0.10 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3322.133, Time=0.12 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.00 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=1.07 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3324.133, Time=0.27 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.647 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1657.067
Date:                Sun, 12 Dec 2021   AIC                           3322.133
Time:                        19:28:48   BIC                           3340.897
Sample:                             0   HQIC                          3329.339
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1966      0.003   -387.226      0.000      -1.203      -1.191
ar.L2         -0.8952      0.006   -138.692      0.000      -0.908      -0.883
ar.L3         -0.3968      0.006    -68.284      0.000      -0.408      -0.385
sigma2         3.5858      0.017    214.535      0.000       3.553       3.619
===================================================================================
Ljung-Box (L1) (Q):                  14.47   Jarque-Bera (JB):           2428881.42
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       271.99
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 
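The stepwise search above (pmdarima's auto_arima) fits candidate (p,d,q) orders and keeps the one with the lowest AIC. As a toy illustration of the selection criterion only, here is AIC-based order selection for pure AR(p) models via least squares (select_ar_order_by_aic is a made-up helper, not pmdarima's implementation):

```python
import numpy as np

def select_ar_order_by_aic(y, max_p=5):
    """Fit AR(p) by least squares for p = 1..max_p and return the order
    minimizing AIC = n*log(RSS/n) + 2*(p+1) (conceptual sketch)."""
    best_p, best_aic = None, np.inf
    for p in range(1, max_p + 1):
        # Design matrix of p lagged values, plus an intercept column.
        rows = [y[i - p:i][::-1] for i in range(p, len(y))]
        X = np.column_stack([np.ones(len(rows)), np.array(rows)])
        target = y[p:]
        coef, *_ = np.linalg.lstsq(X, target, rcond=None)
        rss = np.sum((target - X @ coef) ** 2)
        n = len(target)
        aic = n * np.log(rss / n) + 2 * (p + 1)
        if aic < best_aic:
            best_p, best_aic = p, aic
    return best_p, best_aic

rng = np.random.default_rng(0)
# Simulate a strongly autocorrelated AR(2) process.
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()
p_hat, aic_hat = select_ar_order_by_aic(y)
```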

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.24644, saving model to LSTM3.h5
48/48 - 3s - loss: 0.1403 - mse: 0.1403 - mae: 0.2668 - val_loss: 0.2464 - val_mse: 0.2464 - val_mae: 0.4499 - lr: 0.0010 - 3s/epoch - 66ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.24644 to 0.22849, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0135 - mse: 0.0135 - mae: 0.0924 - val_loss: 0.2285 - val_mse: 0.2285 - val_mae: 0.4307 - lr: 0.0010 - 297ms/epoch - 6ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.22849 to 0.21760, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0120 - mse: 0.0120 - mae: 0.0866 - val_loss: 0.2176 - val_mse: 0.2176 - val_mae: 0.4191 - lr: 0.0010 - 324ms/epoch - 7ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.21760 to 0.19984, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0111 - mse: 0.0111 - mae: 0.0828 - val_loss: 0.1998 - val_mse: 0.1998 - val_mae: 0.4002 - lr: 0.0010 - 300ms/epoch - 6ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.19984
48/48 - 0s - loss: 0.0096 - mse: 0.0096 - mae: 0.0779 - val_loss: 0.2061 - val_mse: 0.2061 - val_mae: 0.4086 - lr: 0.0010 - 278ms/epoch - 6ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.19984
48/48 - 0s - loss: 0.0102 - mse: 0.0102 - mae: 0.0799 - val_loss: 0.2000 - val_mse: 0.2000 - val_mae: 0.4032 - lr: 0.0010 - 296ms/epoch - 6ms/step
Epoch 7/500

Epoch 00007: val_loss improved from 0.19984 to 0.19355, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0094 - mse: 0.0094 - mae: 0.0765 - val_loss: 0.1935 - val_mse: 0.1935 - val_mae: 0.3967 - lr: 0.0010 - 309ms/epoch - 6ms/step
Epoch 8/500

Epoch 00008: val_loss improved from 0.19355 to 0.18682, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0083 - mse: 0.0083 - mae: 0.0728 - val_loss: 0.1868 - val_mse: 0.1868 - val_mae: 0.3900 - lr: 0.0010 - 305ms/epoch - 6ms/step
Epoch 9/500

Epoch 00009: val_loss improved from 0.18682 to 0.18411, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0087 - mse: 0.0087 - mae: 0.0732 - val_loss: 0.1841 - val_mse: 0.1841 - val_mae: 0.3879 - lr: 0.0010 - 309ms/epoch - 6ms/step
Epoch 10/500

Epoch 00010: val_loss improved from 0.18411 to 0.17953, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0098 - mse: 0.0098 - mae: 0.0787 - val_loss: 0.1795 - val_mse: 0.1795 - val_mae: 0.3831 - lr: 0.0010 - 329ms/epoch - 7ms/step
Epoch 11/500

Epoch 00011: val_loss improved from 0.17953 to 0.17132, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0103 - mse: 0.0103 - mae: 0.0797 - val_loss: 0.1713 - val_mse: 0.1713 - val_mae: 0.3735 - lr: 0.0010 - 301ms/epoch - 6ms/step
Epoch 12/500

Epoch 00012: val_loss improved from 0.17132 to 0.16625, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0088 - mse: 0.0088 - mae: 0.0739 - val_loss: 0.1663 - val_mse: 0.1663 - val_mae: 0.3674 - lr: 0.0010 - 295ms/epoch - 6ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.16625
48/48 - 0s - loss: 0.0108 - mse: 0.0108 - mae: 0.0830 - val_loss: 0.1748 - val_mse: 0.1748 - val_mae: 0.3783 - lr: 0.0010 - 313ms/epoch - 7ms/step
Epoch 14/500

Epoch 00014: val_loss improved from 0.16625 to 0.16014, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0112 - mse: 0.0112 - mae: 0.0843 - val_loss: 0.1601 - val_mse: 0.1601 - val_mae: 0.3598 - lr: 0.0010 - 309ms/epoch - 6ms/step
Epoch 15/500

Epoch 00015: val_loss improved from 0.16014 to 0.15651, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0107 - mse: 0.0107 - mae: 0.0829 - val_loss: 0.1565 - val_mse: 0.1565 - val_mae: 0.3558 - lr: 0.0010 - 299ms/epoch - 6ms/step
Epoch 16/500

Epoch 00016: val_loss improved from 0.15651 to 0.14952, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0108 - mse: 0.0108 - mae: 0.0829 - val_loss: 0.1495 - val_mse: 0.1495 - val_mae: 0.3463 - lr: 0.0010 - 325ms/epoch - 7ms/step
Epoch 17/500

Epoch 00017: val_loss improved from 0.14952 to 0.14320, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0097 - mse: 0.0097 - mae: 0.0798 - val_loss: 0.1432 - val_mse: 0.1432 - val_mae: 0.3374 - lr: 0.0010 - 297ms/epoch - 6ms/step
Epoch 18/500

Epoch 00018: val_loss improved from 0.14320 to 0.13904, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0095 - mse: 0.0095 - mae: 0.0780 - val_loss: 0.1390 - val_mse: 0.1390 - val_mae: 0.3310 - lr: 0.0010 - 294ms/epoch - 6ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.13904
48/48 - 0s - loss: 0.0096 - mse: 0.0096 - mae: 0.0795 - val_loss: 0.1461 - val_mse: 0.1461 - val_mae: 0.3405 - lr: 0.0010 - 302ms/epoch - 6ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.13904
48/48 - 0s - loss: 0.0092 - mse: 0.0092 - mae: 0.0778 - val_loss: 0.1609 - val_mse: 0.1609 - val_mae: 0.3602 - lr: 0.0010 - 298ms/epoch - 6ms/step
Epoch 21/500

Epoch 00021: val_loss improved from 0.13904 to 0.13819, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0090 - mse: 0.0090 - mae: 0.0761 - val_loss: 0.1382 - val_mse: 0.1382 - val_mae: 0.3290 - lr: 0.0010 - 300ms/epoch - 6ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.13819
48/48 - 0s - loss: 0.0094 - mse: 0.0094 - mae: 0.0777 - val_loss: 0.1406 - val_mse: 0.1406 - val_mae: 0.3321 - lr: 0.0010 - 285ms/epoch - 6ms/step
Epoch 23/500

Epoch 00023: val_loss improved from 0.13819 to 0.12750, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0091 - mse: 0.0091 - mae: 0.0770 - val_loss: 0.1275 - val_mse: 0.1275 - val_mae: 0.3135 - lr: 0.0010 - 332ms/epoch - 7ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.12750
48/48 - 0s - loss: 0.0090 - mse: 0.0090 - mae: 0.0765 - val_loss: 0.1355 - val_mse: 0.1355 - val_mae: 0.3250 - lr: 0.0010 - 261ms/epoch - 5ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.12750
48/48 - 0s - loss: 0.0086 - mse: 0.0086 - mae: 0.0751 - val_loss: 0.1311 - val_mse: 0.1311 - val_mae: 0.3183 - lr: 0.0010 - 284ms/epoch - 6ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.12750
48/48 - 0s - loss: 0.0085 - mse: 0.0085 - mae: 0.0737 - val_loss: 0.1372 - val_mse: 0.1372 - val_mae: 0.3262 - lr: 0.0010 - 306ms/epoch - 6ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.12750
48/48 - 0s - loss: 0.0084 - mse: 0.0084 - mae: 0.0742 - val_loss: 0.1319 - val_mse: 0.1319 - val_mae: 0.3188 - lr: 0.0010 - 293ms/epoch - 6ms/step
Epoch 28/500

Epoch 00028: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00028: val_loss did not improve from 0.12750
48/48 - 0s - loss: 0.0081 - mse: 0.0081 - mae: 0.0724 - val_loss: 0.1350 - val_mse: 0.1350 - val_mae: 0.3227 - lr: 0.0010 - 297ms/epoch - 6ms/step
Epoch 29/500

Epoch 00029: val_loss improved from 0.12750 to 0.11376, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0119 - mse: 0.0119 - mae: 0.0889 - val_loss: 0.1138 - val_mse: 0.1138 - val_mae: 0.2911 - lr: 1.0000e-04 - 322ms/epoch - 7ms/step
Epoch 30/500

Epoch 00030: val_loss improved from 0.11376 to 0.10768, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0583 - val_loss: 0.1077 - val_mse: 0.1077 - val_mae: 0.2814 - lr: 1.0000e-04 - 318ms/epoch - 7ms/step
Epoch 31/500

Epoch 00031: val_loss improved from 0.10768 to 0.10767, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0519 - val_loss: 0.1077 - val_mse: 0.1077 - val_mae: 0.2813 - lr: 1.0000e-04 - 298ms/epoch - 6ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.10767
48/48 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0508 - val_loss: 0.1100 - val_mse: 0.1100 - val_mae: 0.2849 - lr: 1.0000e-04 - 302ms/epoch - 6ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.10767
48/48 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0502 - val_loss: 0.1116 - val_mse: 0.1116 - val_mae: 0.2873 - lr: 1.0000e-04 - 305ms/epoch - 6ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.10767
48/48 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0477 - val_loss: 0.1124 - val_mse: 0.1124 - val_mae: 0.2886 - lr: 1.0000e-04 - 264ms/epoch - 6ms/step
Epoch 35/500

Epoch 00035: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00035: val_loss did not improve from 0.10767
48/48 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0510 - val_loss: 0.1127 - val_mse: 0.1127 - val_mae: 0.2889 - lr: 1.0000e-04 - 296ms/epoch - 6ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.10767
48/48 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0493 - val_loss: 0.1127 - val_mse: 0.1127 - val_mae: 0.2889 - lr: 1.0000e-05 - 309ms/epoch - 6ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.10767
48/48 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0502 - val_loss: 0.1130 - val_mse: 0.1130 - val_mae: 0.2894 - lr: 1.0000e-05 - 272ms/epoch - 6ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.10767
48/48 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0495 - val_loss: 0.1132 - val_mse: 0.1132 - val_mae: 0.2897 - lr: 1.0000e-05 - 273ms/epoch - 6ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.10767
48/48 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0470 - val_loss: 0.1132 - val_mse: 0.1132 - val_mae: 0.2897 - lr: 1.0000e-05 - 277ms/epoch - 6ms/step
Epoch 40/500

Epoch 00040: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00040: val_loss did not improve from 0.10767
48/48 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0493 - val_loss: 0.1132 - val_mse: 0.1132 - val_mae: 0.2897 - lr: 1.0000e-05 - 320ms/epoch - 7ms/step
[Epochs 41-60 elided: val_loss holds between 0.1133 and 0.1156 at lr=1e-05 and does not improve from 0.10767]
Epoch 61/500

Epoch 00061: val_loss did not improve from 0.10767
48/48 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0483 - val_loss: 0.1154 - val_mse: 0.1154 - val_mae: 0.2929 - lr: 1.0000e-05 - 277ms/epoch - 6ms/step
Epoch 62/500

Epoch 00062: val_loss did not improve from 0.10767
48/48 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0472 - val_loss: 0.1157 - val_mse: 0.1157 - val_mae: 0.2933 - lr: 1.0000e-05 - 307ms/epoch - 6ms/step
Epoch 63/500

Epoch 00063: val_loss did not improve from 0.10767
48/48 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0507 - val_loss: 0.1157 - val_mse: 0.1157 - val_mae: 0.2934 - lr: 1.0000e-05 - 291ms/epoch - 6ms/step
Epoch 64/500

Epoch 00064: val_loss did not improve from 0.10767
48/48 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0494 - val_loss: 0.1157 - val_mse: 0.1157 - val_mae: 0.2934 - lr: 1.0000e-05 - 283ms/epoch - 6ms/step
Epoch 65/500

Epoch 00065: val_loss did not improve from 0.10767
48/48 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0479 - val_loss: 0.1158 - val_mse: 0.1158 - val_mae: 0.2935 - lr: 1.0000e-05 - 283ms/epoch - 6ms/step
Epoch 66/500

Epoch 00066: val_loss did not improve from 0.10767
48/48 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0503 - val_loss: 0.1158 - val_mse: 0.1158 - val_mae: 0.2935 - lr: 1.0000e-05 - 294ms/epoch - 6ms/step
Epoch 67/500

Epoch 00067: val_loss did not improve from 0.10767
48/48 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0496 - val_loss: 0.1160 - val_mse: 0.1160 - val_mae: 0.2938 - lr: 1.0000e-05 - 276ms/epoch - 6ms/step
Epoch 68/500

Epoch 00068: val_loss did not improve from 0.10767
48/48 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0500 - val_loss: 0.1162 - val_mse: 0.1162 - val_mae: 0.2940 - lr: 1.0000e-05 - 302ms/epoch - 6ms/step
Epoch 69/500

Epoch 00069: val_loss did not improve from 0.10767
48/48 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0473 - val_loss: 0.1164 - val_mse: 0.1164 - val_mae: 0.2943 - lr: 1.0000e-05 - 291ms/epoch - 6ms/step
Epoch 70/500

Epoch 00070: val_loss did not improve from 0.10767
48/48 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0490 - val_loss: 0.1163 - val_mse: 0.1163 - val_mae: 0.2942 - lr: 1.0000e-05 - 290ms/epoch - 6ms/step
Epoch 71/500

Epoch 00071: val_loss did not improve from 0.10767
48/48 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0493 - val_loss: 0.1166 - val_mse: 0.1166 - val_mae: 0.2947 - lr: 1.0000e-05 - 278ms/epoch - 6ms/step
Epoch 72/500

Epoch 00072: val_loss did not improve from 0.10767
48/48 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0495 - val_loss: 0.1167 - val_mse: 0.1167 - val_mae: 0.2948 - lr: 1.0000e-05 - 307ms/epoch - 6ms/step
Epoch 73/500

Epoch 00073: val_loss did not improve from 0.10767
48/48 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0491 - val_loss: 0.1169 - val_mse: 0.1169 - val_mae: 0.2950 - lr: 1.0000e-05 - 309ms/epoch - 6ms/step
Epoch 74/500

Epoch 00074: val_loss did not improve from 0.10767
48/48 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0502 - val_loss: 0.1172 - val_mse: 0.1172 - val_mae: 0.2956 - lr: 1.0000e-05 - 300ms/epoch - 6ms/step
Epoch 75/500

Epoch 00075: val_loss did not improve from 0.10767
48/48 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0479 - val_loss: 0.1175 - val_mse: 0.1175 - val_mae: 0.2959 - lr: 1.0000e-05 - 315ms/epoch - 7ms/step
Epoch 76/500

Epoch 00076: val_loss did not improve from 0.10767
48/48 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0465 - val_loss: 0.1179 - val_mse: 0.1179 - val_mae: 0.2966 - lr: 1.0000e-05 - 300ms/epoch - 6ms/step
Epoch 77/500

Epoch 00077: val_loss did not improve from 0.10767
48/48 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0489 - val_loss: 0.1179 - val_mse: 0.1179 - val_mae: 0.2965 - lr: 1.0000e-05 - 276ms/epoch - 6ms/step
Epoch 78/500

Epoch 00078: val_loss did not improve from 0.10767
48/48 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0488 - val_loss: 0.1179 - val_mse: 0.1179 - val_mae: 0.2966 - lr: 1.0000e-05 - 297ms/epoch - 6ms/step
Epoch 79/500

Epoch 00079: val_loss did not improve from 0.10767
48/48 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0476 - val_loss: 0.1183 - val_mse: 0.1183 - val_mae: 0.2971 - lr: 1.0000e-05 - 280ms/epoch - 6ms/step
Epoch 80/500

Epoch 00080: val_loss did not improve from 0.10767
48/48 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0478 - val_loss: 0.1184 - val_mse: 0.1184 - val_mae: 0.2973 - lr: 1.0000e-05 - 301ms/epoch - 6ms/step
Epoch 81/500

Epoch 00081: val_loss did not improve from 0.10767
48/48 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0484 - val_loss: 0.1186 - val_mse: 0.1186 - val_mae: 0.2975 - lr: 1.0000e-05 - 297ms/epoch - 6ms/step
Epoch 00081: early stopping
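The learning-rate drops and the eventual early stop visible in the log above follow the usual Keras `ReduceLROnPlateau` / `EarlyStopping` pattern: the LR is cut by a factor after a run of epochs without `val_loss` improvement, and training halts after a longer plateau. A minimal plain-Python sketch of that plateau logic (the `patience`, `factor` and `min_lr` values here are assumptions; the notebook's actual callback settings are not shown, and Keras's own wait counters differ in detail):

```python
def run_schedule(val_losses, lr=1e-3, reduce_patience=5,
                 stop_patience=50, factor=0.1, min_lr=1e-5):
    """Mimic ReduceLROnPlateau + EarlyStopping on a sequence of val_loss values.

    Returns the per-epoch (epoch, lr) history and the best val_loss seen.
    """
    best = float("inf")
    since_best = 0           # epochs since the last val_loss improvement
    history = []
    for epoch, vl in enumerate(val_losses, start=1):
        if vl < best:
            best, since_best = vl, 0
        else:
            since_best += 1
            # Cut the learning rate every `reduce_patience` stale epochs.
            if since_best % reduce_patience == 0:
                lr = max(lr * factor, min_lr)
        history.append((epoch, lr))
        # Stop entirely after a long plateau, as in the log above.
        if since_best >= stop_patience:
            break
    return history, best
```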
SMA
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 18.67415384478757 
RMSE:	 4.321360184570081 
MAPE:	 3.534296685764838
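The MSE, RMSE, MAPE and accuracy figures reported here can be reproduced with a few lines of NumPy. A sketch, with the caveat that the notebook's exact accuracy definitions are not shown; "Prediction vs Close" is read here as sign agreement between the predicted and actual day-over-day moves:

```python
import numpy as np

def regression_report(actual, predicted):
    """MSE, RMSE and MAPE (in percent), matching the metrics printed per run."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    mse = float(np.mean((actual - predicted) ** 2))
    rmse = float(np.sqrt(mse))
    mape = float(np.mean(np.abs((actual - predicted) / actual)) * 100)
    return mse, rmse, mape

def directional_accuracy(actual, predicted):
    """Percentage of steps where the predicted move has the same sign
    as the actual move (an assumed reading of 'Prediction vs Close')."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    same = np.sign(np.diff(predicted)) == np.sign(np.diff(actual))
    return float(same.mean() * 100)
```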
EMA
EMA([input_arrays], [timeperiod=30])

Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
51
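The TA-Lib `EMA` call documented above has a close pandas equivalent: `ewm(span=n, adjust=False)` applies the same `alpha = 2 / (n + 1)` recursion. One caveat (an assumption worth verifying against your TA-Lib build): TA-Lib seeds the average with an SMA of the first `n` values, so the earliest outputs differ slightly from pandas, which seeds with the first price:

```python
import pandas as pd

# EMA with timeperiod=3, i.e. smoothing factor alpha = 2 / (3 + 1) = 0.5.
prices = pd.Series([10.0, 11.0, 12.0, 11.5, 12.5, 13.0])
ema = prices.ewm(span=3, adjust=False).mean()

# Each value is alpha * price + (1 - alpha) * previous EMA,
# starting from the first price.
```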

Working on EMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.55 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4231.556, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3761.238, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.37 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3532.227, Time=0.09 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3394.496, Time=0.12 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.11 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.83 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3396.496, Time=0.26 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.442 seconds
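pmdarima's stepwise search in the trace above compares candidate orders by AIC and keeps the minimizer. The core idea can be sketched with a plain least-squares AR fit on the differenced series; this is a simplification, since `auto_arima` uses full maximum-likelihood SARIMAX fits and the Gaussian AIC below is only defined up to a constant:

```python
import numpy as np

def fit_ar(y, p):
    """Fit AR(p) by ordinary least squares; return (aic, coefficients)."""
    # Design matrix: row t holds the p previous values y[t-1] ... y[t-p].
    X = np.column_stack([y[p - k : len(y) - k] for k in range(1, p + 1)])
    target = y[p:]
    coefs, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ coefs
    n = len(target)
    aic = n * np.log(resid @ resid / n) + 2 * p  # Gaussian AIC up to a constant
    return float(aic), coefs

def select_order(y, d=3, max_p=3):
    """Difference d times (the search above fixed d=3), then
    pick the AR order with the lowest AIC."""
    for _ in range(d):
        y = np.diff(y)
    scores = {p: fit_ar(y, p)[0] for p in range(1, max_p + 1)}
    return min(scores, key=scores.get)
```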
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1693.248
Date:                Sun, 12 Dec 2021   AIC                           3394.496
Time:                        19:30:44   BIC                           3413.260
Sample:                             0   HQIC                          3401.702
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1982      0.003   -389.569      0.000      -1.204      -1.192
ar.L2         -0.8976      0.006   -139.811      0.000      -0.910      -0.885
ar.L3         -0.3984      0.006    -68.662      0.000      -0.410      -0.387
sigma2         3.9230      0.018    215.372      0.000       3.887       3.959
===================================================================================
Ljung-Box (L1) (Q):                  14.54   Jarque-Bera (JB):           2462173.05
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.82
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.02426, saving model to LSTM3.h5
16/16 - 3s - loss: 0.1273 - mse: 0.1273 - mae: 0.2574 - val_loss: 0.0243 - val_mse: 0.0243 - val_mae: 0.1327 - lr: 0.0010 - 3s/epoch - 179ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.02426 to 0.01915, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0468 - mse: 0.0468 - mae: 0.1827 - val_loss: 0.0192 - val_mse: 0.0192 - val_mae: 0.1203 - lr: 0.0010 - 140ms/epoch - 9ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.01915 to 0.01717, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0185 - mse: 0.0185 - mae: 0.1081 - val_loss: 0.0172 - val_mse: 0.0172 - val_mae: 0.1138 - lr: 0.0010 - 127ms/epoch - 8ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.01717
16/16 - 0s - loss: 0.0181 - mse: 0.0181 - mae: 0.1068 - val_loss: 0.0186 - val_mse: 0.0186 - val_mae: 0.1162 - lr: 0.0010 - 117ms/epoch - 7ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.01717 to 0.01666, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0141 - mse: 0.0141 - mae: 0.0964 - val_loss: 0.0167 - val_mse: 0.0167 - val_mae: 0.1098 - lr: 0.0010 - 126ms/epoch - 8ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.01666 to 0.01635, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0124 - mse: 0.0124 - mae: 0.0898 - val_loss: 0.0164 - val_mse: 0.0164 - val_mae: 0.1079 - lr: 0.0010 - 129ms/epoch - 8ms/step
Epoch 7/500

Epoch 00007: val_loss improved from 0.01635 to 0.01605, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0105 - mse: 0.0105 - mae: 0.0810 - val_loss: 0.0161 - val_mse: 0.0161 - val_mae: 0.1063 - lr: 0.0010 - 147ms/epoch - 9ms/step
Epoch 8/500

Epoch 00008: val_loss improved from 0.01605 to 0.01425, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0107 - mse: 0.0107 - mae: 0.0824 - val_loss: 0.0143 - val_mse: 0.0143 - val_mae: 0.1002 - lr: 0.0010 - 143ms/epoch - 9ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.01425
16/16 - 0s - loss: 0.0101 - mse: 0.0101 - mae: 0.0801 - val_loss: 0.0145 - val_mse: 0.0145 - val_mae: 0.1004 - lr: 0.0010 - 128ms/epoch - 8ms/step
Epoch 10/500

Epoch 00010: val_loss improved from 0.01425 to 0.01280, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0091 - mse: 0.0091 - mae: 0.0771 - val_loss: 0.0128 - val_mse: 0.0128 - val_mae: 0.0945 - lr: 0.0010 - 139ms/epoch - 9ms/step
Epoch 11/500

Epoch 00011: val_loss improved from 0.01280 to 0.01188, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0087 - mse: 0.0087 - mae: 0.0745 - val_loss: 0.0119 - val_mse: 0.0119 - val_mae: 0.0909 - lr: 0.0010 - 137ms/epoch - 9ms/step
Epoch 12/500

Epoch 00012: val_loss improved from 0.01188 to 0.01139, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0091 - mse: 0.0091 - mae: 0.0762 - val_loss: 0.0114 - val_mse: 0.0114 - val_mae: 0.0891 - lr: 0.0010 - 136ms/epoch - 8ms/step
Epoch 13/500

Epoch 00013: val_loss improved from 0.01139 to 0.01046, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0081 - mse: 0.0081 - mae: 0.0715 - val_loss: 0.0105 - val_mse: 0.0105 - val_mae: 0.0858 - lr: 0.0010 - 132ms/epoch - 8ms/step
Epoch 14/500

Epoch 00014: val_loss improved from 0.01046 to 0.01008, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0077 - mse: 0.0077 - mae: 0.0687 - val_loss: 0.0101 - val_mse: 0.0101 - val_mae: 0.0839 - lr: 0.0010 - 151ms/epoch - 9ms/step
Epoch 15/500

Epoch 00015: val_loss improved from 0.01008 to 0.00984, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0087 - mse: 0.0087 - mae: 0.0743 - val_loss: 0.0098 - val_mse: 0.0098 - val_mae: 0.0829 - lr: 0.0010 - 149ms/epoch - 9ms/step
Epoch 16/500

Epoch 00016: val_loss improved from 0.00984 to 0.00980, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0075 - mse: 0.0075 - mae: 0.0691 - val_loss: 0.0098 - val_mse: 0.0098 - val_mae: 0.0826 - lr: 0.0010 - 149ms/epoch - 9ms/step
Epoch 17/500

Epoch 00017: val_loss improved from 0.00980 to 0.00976, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0664 - val_loss: 0.0098 - val_mse: 0.0098 - val_mae: 0.0822 - lr: 0.0010 - 134ms/epoch - 8ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0663 - val_loss: 0.0107 - val_mse: 0.0107 - val_mae: 0.0844 - lr: 0.0010 - 119ms/epoch - 7ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0647 - val_loss: 0.0110 - val_mse: 0.0110 - val_mae: 0.0853 - lr: 0.0010 - 105ms/epoch - 7ms/step
Epoch 20/500

Epoch 00020: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00020: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0598 - val_loss: 0.0123 - val_mse: 0.0123 - val_mae: 0.0888 - lr: 0.0010 - 107ms/epoch - 7ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0598 - val_loss: 0.0119 - val_mse: 0.0119 - val_mae: 0.0877 - lr: 1.0000e-04 - 123ms/epoch - 8ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0591 - val_loss: 0.0116 - val_mse: 0.0116 - val_mae: 0.0868 - lr: 1.0000e-04 - 125ms/epoch - 8ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0599 - val_loss: 0.0115 - val_mse: 0.0115 - val_mae: 0.0867 - lr: 1.0000e-04 - 125ms/epoch - 8ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0601 - val_loss: 0.0117 - val_mse: 0.0117 - val_mae: 0.0871 - lr: 1.0000e-04 - 120ms/epoch - 7ms/step
Epoch 25/500

Epoch 00025: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00025: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0582 - val_loss: 0.0118 - val_mse: 0.0118 - val_mae: 0.0875 - lr: 1.0000e-04 - 122ms/epoch - 8ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0599 - val_loss: 0.0118 - val_mse: 0.0118 - val_mae: 0.0875 - lr: 1.0000e-05 - 114ms/epoch - 7ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0563 - val_loss: 0.0118 - val_mse: 0.0118 - val_mae: 0.0875 - lr: 1.0000e-05 - 109ms/epoch - 7ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0593 - val_loss: 0.0118 - val_mse: 0.0118 - val_mae: 0.0875 - lr: 1.0000e-05 - 110ms/epoch - 7ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0583 - val_loss: 0.0118 - val_mse: 0.0118 - val_mae: 0.0876 - lr: 1.0000e-05 - 120ms/epoch - 8ms/step
Epoch 30/500

Epoch 00030: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00030: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0581 - val_loss: 0.0118 - val_mse: 0.0118 - val_mae: 0.0876 - lr: 1.0000e-05 - 123ms/epoch - 8ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0578 - val_loss: 0.0119 - val_mse: 0.0119 - val_mae: 0.0876 - lr: 1.0000e-05 - 131ms/epoch - 8ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0575 - val_loss: 0.0119 - val_mse: 0.0119 - val_mae: 0.0876 - lr: 1.0000e-05 - 121ms/epoch - 8ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0605 - val_loss: 0.0119 - val_mse: 0.0119 - val_mae: 0.0877 - lr: 1.0000e-05 - 117ms/epoch - 7ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0584 - val_loss: 0.0119 - val_mse: 0.0119 - val_mae: 0.0877 - lr: 1.0000e-05 - 117ms/epoch - 7ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0578 - val_loss: 0.0119 - val_mse: 0.0119 - val_mae: 0.0878 - lr: 1.0000e-05 - 106ms/epoch - 7ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0563 - val_loss: 0.0120 - val_mse: 0.0120 - val_mae: 0.0879 - lr: 1.0000e-05 - 114ms/epoch - 7ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0561 - val_loss: 0.0120 - val_mse: 0.0120 - val_mae: 0.0879 - lr: 1.0000e-05 - 138ms/epoch - 9ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0588 - val_loss: 0.0120 - val_mse: 0.0120 - val_mae: 0.0880 - lr: 1.0000e-05 - 119ms/epoch - 7ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0591 - val_loss: 0.0120 - val_mse: 0.0120 - val_mae: 0.0880 - lr: 1.0000e-05 - 120ms/epoch - 7ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0581 - val_loss: 0.0120 - val_mse: 0.0120 - val_mae: 0.0881 - lr: 1.0000e-05 - 127ms/epoch - 8ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0591 - val_loss: 0.0120 - val_mse: 0.0120 - val_mae: 0.0881 - lr: 1.0000e-05 - 106ms/epoch - 7ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0580 - val_loss: 0.0120 - val_mse: 0.0120 - val_mae: 0.0881 - lr: 1.0000e-05 - 103ms/epoch - 6ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0563 - val_loss: 0.0121 - val_mse: 0.0121 - val_mae: 0.0882 - lr: 1.0000e-05 - 113ms/epoch - 7ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0590 - val_loss: 0.0121 - val_mse: 0.0121 - val_mae: 0.0882 - lr: 1.0000e-05 - 103ms/epoch - 6ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0574 - val_loss: 0.0121 - val_mse: 0.0121 - val_mae: 0.0882 - lr: 1.0000e-05 - 129ms/epoch - 8ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0558 - val_loss: 0.0121 - val_mse: 0.0121 - val_mae: 0.0882 - lr: 1.0000e-05 - 116ms/epoch - 7ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0572 - val_loss: 0.0121 - val_mse: 0.0121 - val_mae: 0.0883 - lr: 1.0000e-05 - 125ms/epoch - 8ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0559 - val_loss: 0.0121 - val_mse: 0.0121 - val_mae: 0.0884 - lr: 1.0000e-05 - 123ms/epoch - 8ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0553 - val_loss: 0.0122 - val_mse: 0.0122 - val_mae: 0.0884 - lr: 1.0000e-05 - 120ms/epoch - 8ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0605 - val_loss: 0.0122 - val_mse: 0.0122 - val_mae: 0.0885 - lr: 1.0000e-05 - 110ms/epoch - 7ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0593 - val_loss: 0.0122 - val_mse: 0.0122 - val_mae: 0.0885 - lr: 1.0000e-05 - 122ms/epoch - 8ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0583 - val_loss: 0.0122 - val_mse: 0.0122 - val_mae: 0.0886 - lr: 1.0000e-05 - 122ms/epoch - 8ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0576 - val_loss: 0.0122 - val_mse: 0.0122 - val_mae: 0.0886 - lr: 1.0000e-05 - 119ms/epoch - 7ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0590 - val_loss: 0.0122 - val_mse: 0.0122 - val_mae: 0.0886 - lr: 1.0000e-05 - 122ms/epoch - 8ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0611 - val_loss: 0.0122 - val_mse: 0.0122 - val_mae: 0.0886 - lr: 1.0000e-05 - 138ms/epoch - 9ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0572 - val_loss: 0.0122 - val_mse: 0.0122 - val_mae: 0.0886 - lr: 1.0000e-05 - 112ms/epoch - 7ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0569 - val_loss: 0.0122 - val_mse: 0.0122 - val_mae: 0.0886 - lr: 1.0000e-05 - 109ms/epoch - 7ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0582 - val_loss: 0.0122 - val_mse: 0.0122 - val_mae: 0.0886 - lr: 1.0000e-05 - 108ms/epoch - 7ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0581 - val_loss: 0.0122 - val_mse: 0.0122 - val_mae: 0.0887 - lr: 1.0000e-05 - 111ms/epoch - 7ms/step
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0588 - val_loss: 0.0122 - val_mse: 0.0122 - val_mae: 0.0887 - lr: 1.0000e-05 - 106ms/epoch - 7ms/step
Epoch 61/500

Epoch 00061: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0579 - val_loss: 0.0122 - val_mse: 0.0122 - val_mae: 0.0887 - lr: 1.0000e-05 - 109ms/epoch - 7ms/step
Epoch 62/500

Epoch 00062: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0609 - val_loss: 0.0123 - val_mse: 0.0123 - val_mae: 0.0887 - lr: 1.0000e-05 - 134ms/epoch - 8ms/step
Epoch 63/500

Epoch 00063: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0565 - val_loss: 0.0123 - val_mse: 0.0123 - val_mae: 0.0888 - lr: 1.0000e-05 - 121ms/epoch - 8ms/step
Epoch 64/500

Epoch 00064: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0561 - val_loss: 0.0123 - val_mse: 0.0123 - val_mae: 0.0889 - lr: 1.0000e-05 - 118ms/epoch - 7ms/step
Epoch 65/500

Epoch 00065: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0576 - val_loss: 0.0123 - val_mse: 0.0123 - val_mae: 0.0889 - lr: 1.0000e-05 - 122ms/epoch - 8ms/step
Epoch 66/500

Epoch 00066: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0563 - val_loss: 0.0123 - val_mse: 0.0123 - val_mae: 0.0890 - lr: 1.0000e-05 - 114ms/epoch - 7ms/step
Epoch 67/500

Epoch 00067: val_loss did not improve from 0.00976
16/16 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0562 - val_loss: 0.0124 - val_mse: 0.0124 - val_mae: 0.0890 - lr: 1.0000e-05 - 114ms/epoch - 7ms/step
Epoch 00067: early stopping
SMA
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 18.67415384478757 
RMSE:	 4.321360184570081 
MAPE:	 3.534296685764838

EMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 55.32374016164833 
RMSE:	 7.437993019736462 
MAPE:	 6.054411328729787
WMA
WMA([input_arrays], [timeperiod=30])

Weighted Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
49
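The `WMA` described above weights each bar in the window by its recency. A small NumPy sketch of the same scheme, assuming (as TA-Lib's docs suggest) linearly increasing weights `1..timeperiod`, with the leading lookback values left as NaN:

```python
import numpy as np

def wma(prices, timeperiod=30):
    """Weighted moving average with weights 1..timeperiod (newest bar heaviest).
    The first timeperiod-1 outputs are NaN, mirroring the indicator's lookback."""
    prices = np.asarray(prices, dtype=float)
    w = np.arange(1, timeperiod + 1, dtype=float)
    out = np.full(len(prices), np.nan)
    for i in range(timeperiod - 1, len(prices)):
        window = prices[i - timeperiod + 1 : i + 1]
        out[i] = window @ w / w.sum()
    return out
```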

Working on WMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.56 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4264.089, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3793.930, Time=0.05 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.31 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3564.923, Time=0.09 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3427.258, Time=0.12 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.74 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.62 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3429.258, Time=0.23 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.770 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1709.629
Date:                Sun, 12 Dec 2021   AIC                           3427.258
Time:                        19:32:25   BIC                           3446.021
Sample:                             0   HQIC                          3434.464
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1981      0.003   -389.386      0.000      -1.204      -1.192
ar.L2         -0.8974      0.006   -139.699      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.737      0.000      -0.410      -0.387
sigma2         4.0860      0.019    215.311      0.000       4.049       4.123
===================================================================================
Ljung-Box (L1) (Q):                  14.57   Jarque-Bera (JB):           2460901.70
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.75
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.68905, saving model to LSTM3.h5
17/17 - 3s - loss: 0.3301 - mse: 0.3301 - mae: 0.4415 - val_loss: 0.6891 - val_mse: 0.6891 - val_mae: 0.7897 - lr: 0.0010 - 3s/epoch - 170ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.68905 to 0.61028, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0979 - mse: 0.0979 - mae: 0.2733 - val_loss: 0.6103 - val_mse: 0.6103 - val_mae: 0.7413 - lr: 0.0010 - 125ms/epoch - 7ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.61028 to 0.50641, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0334 - mse: 0.0334 - mae: 0.1444 - val_loss: 0.5064 - val_mse: 0.5064 - val_mae: 0.6711 - lr: 0.0010 - 128ms/epoch - 8ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.50641 to 0.32715, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0328 - mse: 0.0328 - mae: 0.1433 - val_loss: 0.3272 - val_mse: 0.3272 - val_mae: 0.5261 - lr: 0.0010 - 148ms/epoch - 9ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.32715 to 0.25986, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0234 - mse: 0.0234 - mae: 0.1210 - val_loss: 0.2599 - val_mse: 0.2599 - val_mae: 0.4618 - lr: 0.0010 - 139ms/epoch - 8ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.25986 to 0.19991, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0219 - mse: 0.0219 - mae: 0.1162 - val_loss: 0.1999 - val_mse: 0.1999 - val_mae: 0.3958 - lr: 0.0010 - 152ms/epoch - 9ms/step
Epoch 7/500

Epoch 00007: val_loss improved from 0.19991 to 0.16506, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0191 - mse: 0.0191 - mae: 0.1110 - val_loss: 0.1651 - val_mse: 0.1651 - val_mae: 0.3538 - lr: 0.0010 - 146ms/epoch - 9ms/step
Epoch 8/500

Epoch 00008: val_loss improved from 0.16506 to 0.14332, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0178 - mse: 0.0178 - mae: 0.1061 - val_loss: 0.1433 - val_mse: 0.1433 - val_mae: 0.3254 - lr: 0.0010 - 193ms/epoch - 11ms/step
Epoch 9/500

Epoch 00009: val_loss improved from 0.14332 to 0.13145, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0161 - mse: 0.0161 - mae: 0.1020 - val_loss: 0.1315 - val_mse: 0.1315 - val_mae: 0.3092 - lr: 0.0010 - 136ms/epoch - 8ms/step
Epoch 10/500

Epoch 00010: val_loss improved from 0.13145 to 0.12173, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0152 - mse: 0.0152 - mae: 0.0958 - val_loss: 0.1217 - val_mse: 0.1217 - val_mae: 0.2953 - lr: 0.0010 - 134ms/epoch - 8ms/step
Epoch 11/500

Epoch 00011: val_loss improved from 0.12173 to 0.11103, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0138 - mse: 0.0138 - mae: 0.0945 - val_loss: 0.1110 - val_mse: 0.1110 - val_mae: 0.2796 - lr: 0.0010 - 136ms/epoch - 8ms/step
Epoch 12/500

Epoch 00012: val_loss improved from 0.11103 to 0.10534, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0123 - mse: 0.0123 - mae: 0.0876 - val_loss: 0.1053 - val_mse: 0.1053 - val_mae: 0.2717 - lr: 0.0010 - 147ms/epoch - 9ms/step
Epoch 13/500

Epoch 00013: val_loss improved from 0.10534 to 0.09799, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0131 - mse: 0.0131 - mae: 0.0916 - val_loss: 0.0980 - val_mse: 0.0980 - val_mae: 0.2601 - lr: 0.0010 - 133ms/epoch - 8ms/step
Epoch 14/500

Epoch 00014: val_loss improved from 0.09799 to 0.09722, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0116 - mse: 0.0116 - mae: 0.0849 - val_loss: 0.0972 - val_mse: 0.0972 - val_mae: 0.2595 - lr: 0.0010 - 170ms/epoch - 10ms/step
Epoch 15/500

Epoch 00015: val_loss improved from 0.09722 to 0.09071, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0107 - mse: 0.0107 - mae: 0.0811 - val_loss: 0.0907 - val_mse: 0.0907 - val_mae: 0.2488 - lr: 0.0010 - 190ms/epoch - 11ms/step
Epoch 16/500

Epoch 00016: val_loss improved from 0.09071 to 0.08723, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0101 - mse: 0.0101 - mae: 0.0804 - val_loss: 0.0872 - val_mse: 0.0872 - val_mae: 0.2433 - lr: 0.0010 - 140ms/epoch - 8ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.08723
17/17 - 0s - loss: 0.0107 - mse: 0.0107 - mae: 0.0799 - val_loss: 0.0899 - val_mse: 0.0899 - val_mae: 0.2479 - lr: 0.0010 - 127ms/epoch - 7ms/step
Epoch 18/500

Epoch 00018: val_loss improved from 0.08723 to 0.08630, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0098 - mse: 0.0098 - mae: 0.0802 - val_loss: 0.0863 - val_mse: 0.0863 - val_mae: 0.2412 - lr: 0.0010 - 143ms/epoch - 8ms/step
Epoch 19/500

Epoch 00019: val_loss improved from 0.08630 to 0.08057, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0086 - mse: 0.0086 - mae: 0.0744 - val_loss: 0.0806 - val_mse: 0.0806 - val_mae: 0.2320 - lr: 0.0010 - 150ms/epoch - 9ms/step
Epoch 20/500

Epoch 00020: val_loss improved from 0.08057 to 0.07233, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0093 - mse: 0.0093 - mae: 0.0768 - val_loss: 0.0723 - val_mse: 0.0723 - val_mae: 0.2176 - lr: 0.0010 - 161ms/epoch - 9ms/step
Epoch 21/500

Epoch 00021: val_loss improved from 0.07233 to 0.06957, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0082 - mse: 0.0082 - mae: 0.0728 - val_loss: 0.0696 - val_mse: 0.0696 - val_mae: 0.2127 - lr: 0.0010 - 134ms/epoch - 8ms/step
Epoch 22/500

Epoch 00022: val_loss improved from 0.06957 to 0.06787, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0081 - mse: 0.0081 - mae: 0.0715 - val_loss: 0.0679 - val_mse: 0.0679 - val_mae: 0.2097 - lr: 0.0010 - 147ms/epoch - 9ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.06787
17/17 - 0s - loss: 0.0071 - mse: 0.0071 - mae: 0.0678 - val_loss: 0.0681 - val_mse: 0.0681 - val_mae: 0.2098 - lr: 0.0010 - 116ms/epoch - 7ms/step
Epoch 24/500

Epoch 00024: val_loss improved from 0.06787 to 0.06226, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0071 - mse: 0.0071 - mae: 0.0677 - val_loss: 0.0623 - val_mse: 0.0623 - val_mae: 0.1994 - lr: 0.0010 - 149ms/epoch - 9ms/step
Epoch 25/500

Epoch 00025: val_loss improved from 0.06226 to 0.06186, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0078 - mse: 0.0078 - mae: 0.0707 - val_loss: 0.0619 - val_mse: 0.0619 - val_mae: 0.1983 - lr: 0.0010 - 143ms/epoch - 8ms/step
Epoch 26/500

Epoch 00026: val_loss improved from 0.06186 to 0.05761, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0071 - mse: 0.0071 - mae: 0.0659 - val_loss: 0.0576 - val_mse: 0.0576 - val_mae: 0.1900 - lr: 0.0010 - 141ms/epoch - 8ms/step
Epoch 27/500

Epoch 00027: val_loss improved from 0.05761 to 0.05442, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0657 - val_loss: 0.0544 - val_mse: 0.0544 - val_mae: 0.1839 - lr: 0.0010 - 162ms/epoch - 10ms/step
Epoch 28/500

Epoch 00028: val_loss improved from 0.05442 to 0.04434, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0639 - val_loss: 0.0443 - val_mse: 0.0443 - val_mae: 0.1650 - lr: 0.0010 - 131ms/epoch - 8ms/step
Epoch 29/500

Epoch 00029: val_loss improved from 0.04434 to 0.04366, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0595 - val_loss: 0.0437 - val_mse: 0.0437 - val_mae: 0.1634 - lr: 0.0010 - 145ms/epoch - 9ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.04366
17/17 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0655 - val_loss: 0.0463 - val_mse: 0.0463 - val_mae: 0.1682 - lr: 0.0010 - 114ms/epoch - 7ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.04366
17/17 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0628 - val_loss: 0.0481 - val_mse: 0.0481 - val_mae: 0.1711 - lr: 0.0010 - 127ms/epoch - 7ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.04366
17/17 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0618 - val_loss: 0.0449 - val_mse: 0.0449 - val_mae: 0.1651 - lr: 0.0010 - 123ms/epoch - 7ms/step
Epoch 33/500

Epoch 00033: val_loss improved from 0.04366 to 0.03833, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0580 - val_loss: 0.0383 - val_mse: 0.0383 - val_mae: 0.1518 - lr: 0.0010 - 142ms/epoch - 8ms/step
Epoch 34/500

Epoch 00034: val_loss improved from 0.03833 to 0.03528, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0609 - val_loss: 0.0353 - val_mse: 0.0353 - val_mae: 0.1456 - lr: 0.0010 - 143ms/epoch - 8ms/step
Epoch 35/500

Epoch 00035: val_loss improved from 0.03528 to 0.03241, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0641 - val_loss: 0.0324 - val_mse: 0.0324 - val_mae: 0.1397 - lr: 0.0010 - 138ms/epoch - 8ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.03241
17/17 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0604 - val_loss: 0.0350 - val_mse: 0.0350 - val_mae: 0.1447 - lr: 0.0010 - 121ms/epoch - 7ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.03241
17/17 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0558 - val_loss: 0.0396 - val_mse: 0.0396 - val_mae: 0.1541 - lr: 0.0010 - 119ms/epoch - 7ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.03241
17/17 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0539 - val_loss: 0.0428 - val_mse: 0.0428 - val_mae: 0.1605 - lr: 0.0010 - 124ms/epoch - 7ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.03241
17/17 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0563 - val_loss: 0.0416 - val_mse: 0.0416 - val_mae: 0.1576 - lr: 0.0010 - 122ms/epoch - 7ms/step
Epoch 40/500

Epoch 00040: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00040: val_loss did not improve from 0.03241
17/17 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0565 - val_loss: 0.0326 - val_mse: 0.0326 - val_mae: 0.1384 - lr: 0.0010 - 136ms/epoch - 8ms/step
Epoch 41/500

Epoch 00041: val_loss improved from 0.03241 to 0.03231, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0533 - val_loss: 0.0323 - val_mse: 0.0323 - val_mae: 0.1377 - lr: 1.0000e-04 - 142ms/epoch - 8ms/step
Epoch 42/500

Epoch 00042: val_loss improved from 0.03231 to 0.03133, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0539 - val_loss: 0.0313 - val_mse: 0.0313 - val_mae: 0.1356 - lr: 1.0000e-04 - 163ms/epoch - 10ms/step
Epoch 43/500

Epoch 00043: val_loss improved from 0.03133 to 0.03084, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0539 - val_loss: 0.0308 - val_mse: 0.0308 - val_mae: 0.1346 - lr: 1.0000e-04 - 147ms/epoch - 9ms/step
Epoch 44/500 — Epoch 93/500

[log truncated: over epochs 44–93, val_loss never improved on the best of 0.03084 from epoch 43 and held near 0.031; ReduceLROnPlateau cut the learning rate to 1e-05 at epoch 48 with no further gains.]
Epoch 00093: early stopping
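The run above is driven by three Keras callbacks visible in the log: ModelCheckpoint (saving to LSTM3.h5 on each val_loss improvement), ReduceLROnPlateau (cutting the learning rate by a factor of 10 down to a floor of 1e-05), and EarlyStopping. A minimal pure-Python sketch of that plateau bookkeeping; the patience values of 5 and 40 are assumptions read off the log spacing, not taken from the notebook:

```python
# Sketch of ReduceLROnPlateau(factor=0.1, min_lr=1e-5) + EarlyStopping
# bookkeeping. lr_patience=5 and stop_patience=40 are assumed values,
# not the notebook's actual settings.

def run_schedule(val_losses, lr=1e-3, factor=0.1, min_lr=1e-5,
                 lr_patience=5, stop_patience=40):
    """Replay a val_loss history; return (final_lr, stop_epoch, best)."""
    best = float("inf")
    lr_wait = stop_wait = 0
    for epoch, v in enumerate(val_losses, start=1):
        if v < best:
            best, lr_wait, stop_wait = v, 0, 0
        else:
            lr_wait += 1
            stop_wait += 1
            if lr_wait > lr_patience:       # plateau: cut the learning rate
                lr = max(lr * factor, min_lr)
                lr_wait = 0
            if stop_wait >= stop_patience:  # plateau: stop training early
                return lr, epoch, best
    return lr, len(val_losses), best

# A loss curve that improves, then flatlines, triggers both behaviours:
history = [0.30, 0.10, 0.05, 0.031] + [0.031] * 60
lr, stop_epoch, best = run_schedule(history)
```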
SMA
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 18.67415384478757 
RMSE:	 4.321360184570081 
MAPE:	 3.534296685764838

EMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 55.32374016164833 
RMSE:	 7.437993019736462 
MAPE:	 6.054411328729787

WMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 50.84988253427031 
RMSE:	 7.1309103580307545 
MAPE:	 5.537694007766219
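Each summary block above reports MSE, RMSE, MAPE, and two directional accuracies. A NumPy sketch of how such metrics are commonly computed; the exact definitions of the two accuracy figures are assumptions (directional hit rates), since the notebook's own helper is not shown in this section:

```python
import numpy as np

def report(actual, predicted):
    # Standard error metrics plus an assumed directional accuracy:
    # how often the predicted move has the same sign as the actual move.
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    mse = np.mean((actual - predicted) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((actual - predicted) / actual)) * 100
    dir_acc = np.mean(np.sign(np.diff(predicted)) == np.sign(np.diff(actual)))
    return mse, rmse, mape, dir_acc

mse, rmse, mape, acc = report([100, 102, 101, 105], [99, 103, 100, 104])
```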
DEMA
DEMA([input_arrays], [timeperiod=30])

Double Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
89
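The docstring above is TA-Lib's DEMA, defined as 2·EMA(price, n) − EMA(EMA(price, n), n). A NumPy sketch of that formula; it seeds the EMA with the first value, which differs from TA-Lib's warm-up, so outputs near the start of the series will not match TA-Lib exactly:

```python
import numpy as np

def ema(x, n):
    # Exponential moving average, seeded with the first observation.
    alpha = 2.0 / (n + 1)
    out = np.empty(len(x), dtype=float)
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1 - alpha) * out[i - 1]
    return out

def dema(price, timeperiod=30):
    # DEMA = 2*EMA - EMA(EMA): doubles the single EMA, then subtracts the
    # EMA of the EMA to cancel most of the smoothing lag.
    e1 = ema(np.asarray(price, dtype=float), timeperiod)
    return 2 * e1 - ema(e1, timeperiod)

flat = dema(np.full(100, 50.0))   # constant input stays constant
```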

Working on DEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.56 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4436.126, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3965.317, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.51 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3736.589, Time=0.09 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3598.951, Time=0.12 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.20 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=1.22 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3600.951, Time=0.25 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 4.069 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1795.475
Date:                Sun, 12 Dec 2021   AIC                           3598.951
Time:                        19:34:09   BIC                           3617.714
Sample:                             0   HQIC                          3606.157
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1983      0.003   -389.581      0.000      -1.204      -1.192
ar.L2         -0.8973      0.006   -139.732      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.649      0.000      -0.410      -0.387
sigma2         5.0573      0.023    215.292      0.000       5.011       5.103
===================================================================================
Ljung-Box (L1) (Q):                  14.41   Jarque-Bera (JB):           2460553.80
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.89
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.74
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 
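The information criteria in the SARIMAX table above follow directly from the reported log likelihood, with k = 4 estimated parameters (ar.L1–ar.L3 and sigma2) and an effective sample of n = 808 − 3 = 805 (the d = 3 differences each consume one observation). The Jarque–Bera statistic likewise follows from the reported skew and kurtosis, up to their 2-decimal rounding:

```python
import math

logL, k, n = -1795.475, 4, 808 - 3

aic = 2 * k - 2 * logL                            # table: 3598.951
bic = k * math.log(n) - 2 * logL                  # table: 3617.714
hqic = 2 * k * math.log(math.log(n)) - 2 * logL   # table: 3606.157

# Jarque-Bera from the reported (raw) kurtosis and skew; matches the
# table's 2460553.80 to within the rounding of those two inputs.
skew, kurt = 3.89, 273.74
jb = n / 6 * (skew ** 2 + (kurt - 3) ** 2 / 4)
```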

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.02121, saving model to LSTM3.h5
10/10 - 3s - loss: 0.1545 - mse: 0.1545 - mae: 0.3097 - val_loss: 0.0212 - val_mse: 0.0212 - val_mae: 0.1207 - lr: 0.0010 - 3s/epoch - 282ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.02121
10/10 - 0s - loss: 0.0426 - mse: 0.0426 - mae: 0.1736 - val_loss: 0.0240 - val_mse: 0.0240 - val_mae: 0.1311 - lr: 0.0010 - 86ms/epoch - 9ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.02121
10/10 - 0s - loss: 0.0237 - mse: 0.0237 - mae: 0.1282 - val_loss: 0.0621 - val_mse: 0.0621 - val_mae: 0.2338 - lr: 0.0010 - 83ms/epoch - 8ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.02121
10/10 - 0s - loss: 0.0154 - mse: 0.0154 - mae: 0.0962 - val_loss: 0.1013 - val_mse: 0.1013 - val_mae: 0.3065 - lr: 0.0010 - 107ms/epoch - 11ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.02121
10/10 - 0s - loss: 0.0182 - mse: 0.0182 - mae: 0.1056 - val_loss: 0.1152 - val_mse: 0.1152 - val_mae: 0.3287 - lr: 0.0010 - 88ms/epoch - 9ms/step
Epoch 6/500

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00006: val_loss did not improve from 0.02121
10/10 - 0s - loss: 0.0145 - mse: 0.0145 - mae: 0.0947 - val_loss: 0.1292 - val_mse: 0.1292 - val_mae: 0.3493 - lr: 0.0010 - 78ms/epoch - 8ms/step
Epoch 7/500 — Epoch 51/500

[log truncated: epochs 7–51 never improved on the epoch-1 val_loss of 0.02121; ReduceLROnPlateau cut the learning rate to 1e-05 by epoch 11 while val_loss drifted up to ~0.142.]
Epoch 00051: early stopping
SMA
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 18.67415384478757 
RMSE:	 4.321360184570081 
MAPE:	 3.534296685764838

EMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 55.32374016164833 
RMSE:	 7.437993019736462 
MAPE:	 6.054411328729787

WMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 50.84988253427031 
RMSE:	 7.1309103580307545 
MAPE:	 5.537694007766219

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	48.13% Accuracy
MSE:	 37.751390897405116 
RMSE:	 6.144216052305218 
MAPE:	 4.610910381239713
KAMA
KAMA([input_arrays], [timeperiod=30])

Kaufman Adaptive Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
18
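TA-Lib's KAMA (docstring above) adapts its smoothing constant between a fast (2-period) and slow (30-period) EMA according to Kaufman's efficiency ratio ER = |net change| / sum of |changes|. A NumPy sketch of the standard recursion; the fast/slow bounds are the conventional defaults and the seeding is an assumption, so early values will not match TA-Lib's warm-up exactly:

```python
import numpy as np

def kama(price, timeperiod=30, fast=2, slow=30):
    price = np.asarray(price, dtype=float)
    out = np.full(len(price), np.nan)
    out[timeperiod - 1] = price[timeperiod - 1]    # assumed seed value
    fastest, slowest = 2.0 / (fast + 1), 2.0 / (slow + 1)
    for i in range(timeperiod, len(price)):
        change = abs(price[i] - price[i - timeperiod])
        vol = np.sum(np.abs(np.diff(price[i - timeperiod:i + 1])))
        er = change / vol if vol else 0.0          # efficiency ratio in [0, 1]
        sc = (er * (fastest - slowest) + slowest) ** 2
        out[i] = out[i - 1] + sc * (price[i] - out[i - 1])
    return out

# On a straight line ER = 1, so KAMA tracks the trend with a small fixed lag.
trend = kama(np.arange(100, dtype=float))
```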

Working on KAMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.50 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4190.464, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3724.371, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.36 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3494.154, Time=0.10 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3357.435, Time=0.13 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.59 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.98 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3359.435, Time=0.26 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 4.020 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1674.717
Date:                Sun, 12 Dec 2021   AIC                           3357.435
Time:                        19:35:38   BIC                           3376.198
Sample:                             0   HQIC                          3364.641
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1955      0.003   -381.246      0.000      -1.202      -1.189
ar.L2         -0.8964      0.007   -135.835      0.000      -0.909      -0.883
ar.L3         -0.3971      0.006    -67.229      0.000      -0.409      -0.385
sigma2         3.7466      0.018    211.623      0.000       3.712       3.781
===================================================================================
Ljung-Box (L1) (Q):                  14.20   Jarque-Bera (JB):           2338363.32
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.01   Skew:                             3.76
Prob(H) (two-sided):                  0.00   Kurtosis:                       266.93
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 
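The stepwise search above is pmdarima's `auto_arima` choosing the (p,d,q) order with the lowest AIC. The core idea, penalized-likelihood comparison, can be sketched without pmdarima: the toy below fits pure AR(p) models by least squares and scores them with the Gaussian AIC, `n*log(RSS/n) + 2k`. It is a deliberate simplification of the real search, which also varies d and q:

```python
import numpy as np

def ar_aic(y, p):
    """Fit AR(p) by OLS and return the Gaussian AIC n*log(RSS/n) + 2*(p+1)."""
    Y = y[p:]
    X = np.column_stack([y[p - i - 1 : len(y) - i - 1] for i in range(p)])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    rss = ((Y - X @ coef) ** 2).sum()
    n = len(Y)
    return n * np.log(rss / n) + 2 * (p + 1)

def best_ar_order(y, max_p=5):
    """Stepwise-style selection: pick the AR order with minimum AIC."""
    scores = {p: ar_aic(y, p) for p in range(1, max_p + 1)}
    return min(scores, key=scores.get)
```

The extra `+2` per parameter is what lets AIC reject the near-tied `intercept` variant above (3359.435 vs 3357.435): the intercept buys no likelihood, so its penalty decides.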

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.33672, saving model to LSTM3.h5
45/45 - 3s - loss: 0.3992 - mse: 0.3992 - mae: 0.5017 - val_loss: 0.3367 - val_mse: 0.3367 - val_mae: 0.5327 - lr: 0.0010 - 3s/epoch - 68ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.33672 to 0.20594, saving model to LSTM3.h5
45/45 - 0s - loss: 0.0583 - mse: 0.0583 - mae: 0.1906 - val_loss: 0.2059 - val_mse: 0.2059 - val_mae: 0.4125 - lr: 0.0010 - 303ms/epoch - 7ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.20594 to 0.08951, saving model to LSTM3.h5
45/45 - 0s - loss: 0.0333 - mse: 0.0333 - mae: 0.1414 - val_loss: 0.0895 - val_mse: 0.0895 - val_mae: 0.2611 - lr: 0.0010 - 308ms/epoch - 7ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.08951 to 0.03778, saving model to LSTM3.h5
45/45 - 0s - loss: 0.0207 - mse: 0.0207 - mae: 0.1149 - val_loss: 0.0378 - val_mse: 0.0378 - val_mae: 0.1617 - lr: 0.0010 - 286ms/epoch - 6ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.03778 to 0.02776, saving model to LSTM3.h5
45/45 - 0s - loss: 0.0165 - mse: 0.0165 - mae: 0.1032 - val_loss: 0.0278 - val_mse: 0.0278 - val_mae: 0.1369 - lr: 0.0010 - 310ms/epoch - 7ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.02776 to 0.02688, saving model to LSTM3.h5
45/45 - 0s - loss: 0.0164 - mse: 0.0164 - mae: 0.1050 - val_loss: 0.0269 - val_mse: 0.0269 - val_mae: 0.1344 - lr: 0.0010 - 300ms/epoch - 7ms/step
Epoch 7/500

Epoch 00007: val_loss improved from 0.02688 to 0.02473, saving model to LSTM3.h5
45/45 - 0s - loss: 0.0141 - mse: 0.0141 - mae: 0.0957 - val_loss: 0.0247 - val_mse: 0.0247 - val_mae: 0.1288 - lr: 0.0010 - 313ms/epoch - 7ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.02473
45/45 - 0s - loss: 0.0126 - mse: 0.0126 - mae: 0.0911 - val_loss: 0.0324 - val_mse: 0.0324 - val_mae: 0.1502 - lr: 0.0010 - 269ms/epoch - 6ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.02473
45/45 - 0s - loss: 0.0132 - mse: 0.0132 - mae: 0.0928 - val_loss: 0.0354 - val_mse: 0.0354 - val_mae: 0.1586 - lr: 0.0010 - 279ms/epoch - 6ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.02473
45/45 - 0s - loss: 0.0127 - mse: 0.0127 - mae: 0.0901 - val_loss: 0.0337 - val_mse: 0.0337 - val_mae: 0.1553 - lr: 0.0010 - 286ms/epoch - 6ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.02473
45/45 - 0s - loss: 0.0119 - mse: 0.0119 - mae: 0.0877 - val_loss: 0.0318 - val_mse: 0.0318 - val_mae: 0.1508 - lr: 0.0010 - 263ms/epoch - 6ms/step
Epoch 12/500

Epoch 00012: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00012: val_loss did not improve from 0.02473
45/45 - 0s - loss: 0.0122 - mse: 0.0122 - mae: 0.0893 - val_loss: 0.0340 - val_mse: 0.0340 - val_mae: 0.1580 - lr: 0.0010 - 290ms/epoch - 6ms/step
Epoch 13/500

Epoch 00013: val_loss improved from 0.02473 to 0.02314, saving model to LSTM3.h5
45/45 - 0s - loss: 0.0200 - mse: 0.0200 - mae: 0.1158 - val_loss: 0.0231 - val_mse: 0.0231 - val_mae: 0.1262 - lr: 1.0000e-04 - 280ms/epoch - 6ms/step
Epoch 14/500

Epoch 00014: val_loss improved from 0.02314 to 0.02133, saving model to LSTM3.h5
45/45 - 0s - loss: 0.0088 - mse: 0.0088 - mae: 0.0756 - val_loss: 0.0213 - val_mse: 0.0213 - val_mae: 0.1202 - lr: 1.0000e-04 - 295ms/epoch - 7ms/step
... Epochs 15-63: val_loss did not improve from 0.02133; lr reduced to 1e-05 at epoch 19 (floored at epoch 24) ...
Epoch 64/500

Epoch 00064: val_loss did not improve from 0.02133
45/45 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0617 - val_loss: 0.0317 - val_mse: 0.0317 - val_mae: 0.1488 - lr: 1.0000e-05 - 274ms/epoch - 6ms/step
Epoch 00064: early stopping
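The log above shows two Keras callbacks interacting: `ReduceLROnPlateau` cuts the learning rate by 10x whenever val_loss stalls, and `EarlyStopping` ends the run after a longer stall. The patience values are not visible in the log, so the ones below are illustrative only; the combined logic amounts to:

```python
def plateau_schedule(val_losses, lr=1e-3, factor=0.1, lr_patience=5,
                     stop_patience=50, min_lr=1e-5):
    """Mimic ReduceLROnPlateau + EarlyStopping over a sequence of val losses.

    Returns (final_lr, epochs_run). Patience values are illustrative, not the
    notebook's actual settings.
    """
    best, since_best = float("inf"), 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best, since_best = loss, 0
        else:
            since_best += 1
        if since_best >= stop_patience:
            return lr, epoch                      # early stopping fires
        if since_best and since_best % lr_patience == 0:
            lr = max(lr * factor, min_lr)         # plateau: cut the LR
    return lr, len(val_losses)
```

This reproduces the shape of the run above: the LR steps 1e-3 → 1e-4 → 1e-5, is floored at `min_lr`, and training stops well short of the nominal 500 epochs.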
... SMA, EMA, WMA and DEMA metrics repeated unchanged from the previous summary ...

KAMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	51.12% Accuracy
MSE:	 36.41651471411913 
RMSE:	 6.034609740001348 
MAPE:	 4.797119170641582
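Each summary block reports MSE, RMSE, MAPE and two directional-accuracy figures. A minimal NumPy version of those metrics follows; the two directional definitions are my reading of the labels ("vs Close" compares the predicted move against the actual close-to-close move, "vs Prediction" compares consecutive predictions against actual moves), so treat them as assumptions rather than the notebook's hidden code:

```python
import numpy as np

def report(close, pred):
    """MSE / RMSE / MAPE plus two directional accuracies (definitions assumed)."""
    close, pred = np.asarray(close, float), np.asarray(pred, float)
    err = close - pred
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs(err / close)) * 100
    actual_dir = np.sign(np.diff(close))                          # actual up/down moves
    vs_close = np.mean(np.sign(pred[1:] - close[:-1]) == actual_dir) * 100
    vs_pred = np.mean(np.sign(np.diff(pred)) == actual_dir) * 100
    return {"MSE": mse, "RMSE": rmse, "MAPE": mape,
            "vs_close_acc": vs_close, "vs_pred_acc": vs_pred}
```

Note how the two accuracies can diverge, as they do above: a forecast can sit on the right side of yesterday's close while its own epoch-to-epoch direction disagrees with the market's.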
MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])

MidPoint over period (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 14
Outputs:
    real
14
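MIDPOINT is the simplest overlap study here: the mean of the highest and lowest price over the window. A rolling NumPy sketch (my implementation, not TA-Lib's code):

```python
import numpy as np

def midpoint(price, timeperiod=14):
    """(max + min) / 2 over a rolling window; NaN until the window fills."""
    price = np.asarray(price, dtype=float)
    out = np.full(len(price), np.nan)
    for t in range(timeperiod - 1, len(price)):
        window = price[t - timeperiod + 1 : t + 1]
        out[t] = (window.max() + window.min()) / 2
    return out
```

Because it is a pure range statistic, MIDPOINT lags a trend by half the window, which is consistent with it producing the worst errors of the smoothers tested below.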

Working on MIDPOINT predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.51 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4212.289, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3747.746, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.32 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3523.401, Time=0.10 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3387.759, Time=0.15 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.58 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=1.13 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3389.758, Time=0.29 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 4.186 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1689.879
Date:                Sun, 12 Dec 2021   AIC                           3387.759
Time:                        19:37:29   BIC                           3406.522
Sample:                             0   HQIC                          3394.964
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1878      0.003   -345.315      0.000      -1.195      -1.181
ar.L2         -0.8876      0.007   -121.809      0.000      -0.902      -0.873
ar.L3         -0.3957      0.007    -60.127      0.000      -0.409      -0.383
sigma2         3.8904      0.020    193.404      0.000       3.851       3.930
===================================================================================
Ljung-Box (L1) (Q):                  13.21   Jarque-Bera (JB):           1659080.01
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.08   Skew:                             3.28
Prob(H) (two-sided):                  0.00   Kurtosis:                       225.31
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.28108, saving model to LSTM3.h5
58/58 - 3s - loss: 0.3504 - mse: 0.3504 - mae: 0.4238 - val_loss: 0.2811 - val_mse: 0.2811 - val_mae: 0.4826 - lr: 0.0010 - 3s/epoch - 54ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.28108 to 0.05179, saving model to LSTM3.h5
58/58 - 0s - loss: 0.0456 - mse: 0.0456 - mae: 0.1608 - val_loss: 0.0518 - val_mse: 0.0518 - val_mae: 0.1749 - lr: 0.0010 - 349ms/epoch - 6ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.05179 to 0.02271, saving model to LSTM3.h5
58/58 - 0s - loss: 0.0261 - mse: 0.0261 - mae: 0.1261 - val_loss: 0.0227 - val_mse: 0.0227 - val_mae: 0.1283 - lr: 0.0010 - 351ms/epoch - 6ms/step
... Epochs 4-52: val_loss did not improve from 0.02271; lr reduced stepwise to 1e-05 (epochs 8, 13, 18) ...
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.02271
58/58 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0663 - val_loss: 0.0265 - val_mse: 0.0265 - val_mae: 0.1431 - lr: 1.0000e-05 - 358ms/epoch - 6ms/step
Epoch 00053: early stopping
... SMA, EMA, WMA, DEMA and KAMA metrics repeated unchanged from the previous summaries ...

MIDPOINT
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 142.27347883578213 
RMSE:	 11.927844685264063 
MAPE:	 10.30348805298139
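The summary blocks report MSE, RMSE, MAPE, and two direction-accuracy figures per moving average. A minimal sketch of how such metrics are typically computed — the array names are hypothetical, and the notebook's exact "Prediction vs Prediction" definition is not shown in the log, so only a plain day-over-day direction match is illustrated:

```python
import numpy as np

def report(pred, close):
    """Error metrics for aligned 1-D arrays of predicted and actual prices."""
    err = pred - close
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs(err / close)) * 100.0
    # Direction accuracy: did the prediction move the same way as the close?
    dir_acc = np.mean(np.sign(np.diff(pred)) == np.sign(np.diff(close))) * 100.0
    return mse, rmse, mape, dir_acc

pred = np.array([100.0, 101.5, 101.0, 102.5])
close = np.array([100.0, 101.0, 101.8, 102.0])
mse, rmse, mape, dir_acc = report(pred, close)
```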

T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])

Triple Exponential Moving Average (T3) (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 5
    vfactor: 0.7
Outputs:
    real
19
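The help text above is TA-Lib's docstring for T3. Under Tillson's standard definition, T3 is a triple application of a "generalized DEMA", GD(x) = EMA(x)·(1+v) − EMA(EMA(x))·v. A pure-pandas sketch of that definition follows; note TA-Lib's own implementation differs slightly in seeding and unstable-period handling, so values will not match it exactly:

```python
import pandas as pd

def gd(x: pd.Series, period: int, v: float) -> pd.Series:
    """Generalized DEMA: EMA(x)*(1+v) - EMA(EMA(x))*v."""
    e1 = x.ewm(span=period, adjust=False).mean()
    e2 = e1.ewm(span=period, adjust=False).mean()
    return e1 * (1 + v) - e2 * v

def t3(price: pd.Series, timeperiod: int = 5, vfactor: float = 0.7) -> pd.Series:
    """Tillson T3: GD applied three times with the same period and vfactor."""
    return gd(gd(gd(price, timeperiod, vfactor), timeperiod, vfactor),
              timeperiod, vfactor)

prices = pd.Series([10.0, 10.5, 10.2, 10.8, 11.0, 10.9, 11.3, 11.5])
smooth = t3(prices)
```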

Working on T3 predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.49 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4414.515, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3944.062, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.49 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3715.173, Time=0.10 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3577.471, Time=0.11 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.88 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.77 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3579.471, Time=0.26 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 4.195 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1784.736
Date:                Sun, 12 Dec 2021   AIC                           3577.471
Time:                        19:39:21   BIC                           3596.235
Sample:                             0   HQIC                          3584.677
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1982      0.003   -389.844      0.000      -1.204      -1.192
ar.L2         -0.8974      0.006   -139.861      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.862      0.000      -0.410      -0.387
sigma2         4.9242      0.023    215.469      0.000       4.879       4.969
===================================================================================
Ljung-Box (L1) (Q):                  14.55   Jarque-Bera (JB):           2468024.38
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       274.15
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.35924, saving model to LSTM3.h5
43/43 - 3s - loss: 0.0883 - mse: 0.0883 - mae: 0.2265 - val_loss: 0.3592 - val_mse: 0.3592 - val_mae: 0.5553 - lr: 0.0010 - 3s/epoch - 73ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.35924 to 0.25624, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0135 - mse: 0.0135 - mae: 0.0925 - val_loss: 0.2562 - val_mse: 0.2562 - val_mae: 0.4608 - lr: 0.0010 - 283ms/epoch - 7ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.25624 to 0.22292, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0109 - mse: 0.0109 - mae: 0.0827 - val_loss: 0.2229 - val_mse: 0.2229 - val_mae: 0.4263 - lr: 0.0010 - 302ms/epoch - 7ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.22292 to 0.19895, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0107 - mse: 0.0107 - mae: 0.0813 - val_loss: 0.1989 - val_mse: 0.1989 - val_mae: 0.3999 - lr: 0.0010 - 313ms/epoch - 7ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.19895
43/43 - 0s - loss: 0.0096 - mse: 0.0096 - mae: 0.0773 - val_loss: 0.2254 - val_mse: 0.2254 - val_mae: 0.4295 - lr: 0.0010 - 261ms/epoch - 6ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.19895 to 0.19053, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0107 - mse: 0.0107 - mae: 0.0819 - val_loss: 0.1905 - val_mse: 0.1905 - val_mae: 0.3914 - lr: 0.0010 - 302ms/epoch - 7ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.19053
43/43 - 0s - loss: 0.0099 - mse: 0.0099 - mae: 0.0781 - val_loss: 0.2219 - val_mse: 0.2219 - val_mae: 0.4270 - lr: 0.0010 - 268ms/epoch - 6ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.19053
43/43 - 0s - loss: 0.0109 - mse: 0.0109 - mae: 0.0802 - val_loss: 0.2211 - val_mse: 0.2211 - val_mae: 0.4254 - lr: 0.0010 - 253ms/epoch - 6ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.19053
43/43 - 0s - loss: 0.0118 - mse: 0.0118 - mae: 0.0823 - val_loss: 0.2407 - val_mse: 0.2407 - val_mae: 0.4453 - lr: 0.0010 - 290ms/epoch - 7ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.19053
43/43 - 0s - loss: 0.0120 - mse: 0.0120 - mae: 0.0846 - val_loss: 0.2614 - val_mse: 0.2614 - val_mae: 0.4678 - lr: 0.0010 - 272ms/epoch - 6ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00011: val_loss did not improve from 0.19053
43/43 - 0s - loss: 0.0125 - mse: 0.0125 - mae: 0.0866 - val_loss: 0.2550 - val_mse: 0.2550 - val_mae: 0.4611 - lr: 0.0010 - 273ms/epoch - 6ms/step
Epoch 12/500

Epoch 00012: val_loss improved from 0.19053 to 0.17007, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0338 - mse: 0.0338 - mae: 0.1518 - val_loss: 0.1701 - val_mse: 0.1701 - val_mae: 0.3675 - lr: 1.0000e-04 - 303ms/epoch - 7ms/step
Epoch 13/500

Epoch 00013: val_loss improved from 0.17007 to 0.16824, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0087 - mse: 0.0087 - mae: 0.0772 - val_loss: 0.1682 - val_mse: 0.1682 - val_mae: 0.3650 - lr: 1.0000e-04 - 306ms/epoch - 7ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0086 - mse: 0.0086 - mae: 0.0752 - val_loss: 0.1711 - val_mse: 0.1711 - val_mae: 0.3682 - lr: 1.0000e-04 - 256ms/epoch - 6ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0078 - mse: 0.0078 - mae: 0.0715 - val_loss: 0.1758 - val_mse: 0.1758 - val_mae: 0.3736 - lr: 1.0000e-04 - 273ms/epoch - 6ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0079 - mse: 0.0079 - mae: 0.0720 - val_loss: 0.1784 - val_mse: 0.1784 - val_mae: 0.3765 - lr: 1.0000e-04 - 292ms/epoch - 7ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0074 - mse: 0.0074 - mae: 0.0690 - val_loss: 0.1805 - val_mse: 0.1805 - val_mae: 0.3785 - lr: 1.0000e-04 - 255ms/epoch - 6ms/step
Epoch 18/500

Epoch 00018: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00018: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0675 - val_loss: 0.1842 - val_mse: 0.1842 - val_mae: 0.3826 - lr: 1.0000e-04 - 285ms/epoch - 7ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0631 - val_loss: 0.1838 - val_mse: 0.1838 - val_mae: 0.3822 - lr: 1.0000e-05 - 290ms/epoch - 7ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0631 - val_loss: 0.1835 - val_mse: 0.1835 - val_mae: 0.3818 - lr: 1.0000e-05 - 269ms/epoch - 6ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0630 - val_loss: 0.1839 - val_mse: 0.1839 - val_mae: 0.3822 - lr: 1.0000e-05 - 270ms/epoch - 6ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0639 - val_loss: 0.1835 - val_mse: 0.1835 - val_mae: 0.3817 - lr: 1.0000e-05 - 266ms/epoch - 6ms/step
Epoch 23/500

Epoch 00023: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00023: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0623 - val_loss: 0.1833 - val_mse: 0.1833 - val_mae: 0.3815 - lr: 1.0000e-05 - 261ms/epoch - 6ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0638 - val_loss: 0.1833 - val_mse: 0.1833 - val_mae: 0.3814 - lr: 1.0000e-05 - 267ms/epoch - 6ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0652 - val_loss: 0.1837 - val_mse: 0.1837 - val_mae: 0.3819 - lr: 1.0000e-05 - 260ms/epoch - 6ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0633 - val_loss: 0.1841 - val_mse: 0.1841 - val_mae: 0.3823 - lr: 1.0000e-05 - 295ms/epoch - 7ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0624 - val_loss: 0.1849 - val_mse: 0.1849 - val_mae: 0.3832 - lr: 1.0000e-05 - 292ms/epoch - 7ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0597 - val_loss: 0.1852 - val_mse: 0.1852 - val_mae: 0.3836 - lr: 1.0000e-05 - 261ms/epoch - 6ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0663 - val_loss: 0.1851 - val_mse: 0.1851 - val_mae: 0.3834 - lr: 1.0000e-05 - 288ms/epoch - 7ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0608 - val_loss: 0.1856 - val_mse: 0.1856 - val_mae: 0.3840 - lr: 1.0000e-05 - 302ms/epoch - 7ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0645 - val_loss: 0.1855 - val_mse: 0.1855 - val_mae: 0.3839 - lr: 1.0000e-05 - 258ms/epoch - 6ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0631 - val_loss: 0.1854 - val_mse: 0.1854 - val_mae: 0.3837 - lr: 1.0000e-05 - 260ms/epoch - 6ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0589 - val_loss: 0.1862 - val_mse: 0.1862 - val_mae: 0.3845 - lr: 1.0000e-05 - 295ms/epoch - 7ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0637 - val_loss: 0.1864 - val_mse: 0.1864 - val_mae: 0.3847 - lr: 1.0000e-05 - 287ms/epoch - 7ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0620 - val_loss: 0.1870 - val_mse: 0.1870 - val_mae: 0.3854 - lr: 1.0000e-05 - 252ms/epoch - 6ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0602 - val_loss: 0.1870 - val_mse: 0.1870 - val_mae: 0.3854 - lr: 1.0000e-05 - 278ms/epoch - 6ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0610 - val_loss: 0.1872 - val_mse: 0.1872 - val_mae: 0.3856 - lr: 1.0000e-05 - 266ms/epoch - 6ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0615 - val_loss: 0.1879 - val_mse: 0.1879 - val_mae: 0.3864 - lr: 1.0000e-05 - 256ms/epoch - 6ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0611 - val_loss: 0.1884 - val_mse: 0.1884 - val_mae: 0.3870 - lr: 1.0000e-05 - 261ms/epoch - 6ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0625 - val_loss: 0.1891 - val_mse: 0.1891 - val_mae: 0.3877 - lr: 1.0000e-05 - 258ms/epoch - 6ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0611 - val_loss: 0.1896 - val_mse: 0.1896 - val_mae: 0.3883 - lr: 1.0000e-05 - 251ms/epoch - 6ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0624 - val_loss: 0.1895 - val_mse: 0.1895 - val_mae: 0.3882 - lr: 1.0000e-05 - 256ms/epoch - 6ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0631 - val_loss: 0.1902 - val_mse: 0.1902 - val_mae: 0.3890 - lr: 1.0000e-05 - 275ms/epoch - 6ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0603 - val_loss: 0.1907 - val_mse: 0.1907 - val_mae: 0.3895 - lr: 1.0000e-05 - 268ms/epoch - 6ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0621 - val_loss: 0.1902 - val_mse: 0.1902 - val_mae: 0.3889 - lr: 1.0000e-05 - 257ms/epoch - 6ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0596 - val_loss: 0.1908 - val_mse: 0.1908 - val_mae: 0.3895 - lr: 1.0000e-05 - 271ms/epoch - 6ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0624 - val_loss: 0.1912 - val_mse: 0.1912 - val_mae: 0.3899 - lr: 1.0000e-05 - 263ms/epoch - 6ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0619 - val_loss: 0.1917 - val_mse: 0.1917 - val_mae: 0.3905 - lr: 1.0000e-05 - 263ms/epoch - 6ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0614 - val_loss: 0.1922 - val_mse: 0.1922 - val_mae: 0.3910 - lr: 1.0000e-05 - 258ms/epoch - 6ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0627 - val_loss: 0.1923 - val_mse: 0.1923 - val_mae: 0.3911 - lr: 1.0000e-05 - 264ms/epoch - 6ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0602 - val_loss: 0.1928 - val_mse: 0.1928 - val_mae: 0.3917 - lr: 1.0000e-05 - 279ms/epoch - 6ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0617 - val_loss: 0.1926 - val_mse: 0.1926 - val_mae: 0.3914 - lr: 1.0000e-05 - 258ms/epoch - 6ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0604 - val_loss: 0.1930 - val_mse: 0.1930 - val_mae: 0.3918 - lr: 1.0000e-05 - 264ms/epoch - 6ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0581 - val_loss: 0.1933 - val_mse: 0.1933 - val_mae: 0.3922 - lr: 1.0000e-05 - 278ms/epoch - 6ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0592 - val_loss: 0.1929 - val_mse: 0.1929 - val_mae: 0.3916 - lr: 1.0000e-05 - 274ms/epoch - 6ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0611 - val_loss: 0.1924 - val_mse: 0.1924 - val_mae: 0.3909 - lr: 1.0000e-05 - 267ms/epoch - 6ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0585 - val_loss: 0.1920 - val_mse: 0.1920 - val_mae: 0.3904 - lr: 1.0000e-05 - 242ms/epoch - 6ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0599 - val_loss: 0.1925 - val_mse: 0.1925 - val_mae: 0.3910 - lr: 1.0000e-05 - 251ms/epoch - 6ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0614 - val_loss: 0.1938 - val_mse: 0.1938 - val_mae: 0.3924 - lr: 1.0000e-05 - 272ms/epoch - 6ms/step
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0610 - val_loss: 0.1947 - val_mse: 0.1947 - val_mae: 0.3934 - lr: 1.0000e-05 - 240ms/epoch - 6ms/step
Epoch 61/500

Epoch 00061: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0583 - val_loss: 0.1956 - val_mse: 0.1956 - val_mae: 0.3945 - lr: 1.0000e-05 - 285ms/epoch - 7ms/step
Epoch 62/500

Epoch 00062: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0594 - val_loss: 0.1959 - val_mse: 0.1959 - val_mae: 0.3947 - lr: 1.0000e-05 - 283ms/epoch - 7ms/step
Epoch 63/500

Epoch 00063: val_loss did not improve from 0.16824
43/43 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0575 - val_loss: 0.1958 - val_mse: 0.1958 - val_mae: 0.3946 - lr: 1.0000e-05 - 279ms/epoch - 6ms/step
Epoch 00063: early stopping
SMA
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 18.67415384478757 
RMSE:	 4.321360184570081 
MAPE:	 3.534296685764838

EMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 55.32374016164833 
RMSE:	 7.437993019736462 
MAPE:	 6.054411328729787

WMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 50.84988253427031 
RMSE:	 7.1309103580307545 
MAPE:	 5.537694007766219

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	48.13% Accuracy
MSE:	 37.751390897405116 
RMSE:	 6.144216052305218 
MAPE:	 4.610910381239713

KAMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	51.12% Accuracy
MSE:	 36.41651471411913 
RMSE:	 6.034609740001348 
MAPE:	 4.797119170641582

MIDPOINT
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 142.27347883578213 
RMSE:	 11.927844685264063 
MAPE:	 10.30348805298139

T3
Prediction vs Close:		51.12% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 37.64075623558134 
RMSE:	 6.135206291200104 
MAPE:	 5.0145827751195515

TEMA
TEMA([input_arrays], [timeperiod=30])

Triple Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
9
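TEMA's standard formula is TEMA = 3·EMA1 − 3·EMA2 + EMA3, where each EMA is applied to the previous EMA's output. A pure-pandas sketch (again, TA-Lib's seeding differs slightly, so values will not match it exactly):

```python
import pandas as pd

def tema(price: pd.Series, timeperiod: int = 30) -> pd.Series:
    """Triple EMA: 3*EMA1 - 3*EMA2 + EMA3, each EMA chained on the last."""
    e1 = price.ewm(span=timeperiod, adjust=False).mean()
    e2 = e1.ewm(span=timeperiod, adjust=False).mean()
    e3 = e2.ewm(span=timeperiod, adjust=False).mean()
    return 3 * e1 - 3 * e2 + e3

s = tema(pd.Series([5.0] * 10), timeperiod=3)  # constant in, constant out
```

The 3/−3/+1 weighting cancels most of the lag that a single EMA introduces, which is why TEMA tracks price turns more tightly than SMA or EMA.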

Working on TEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.66 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4352.703, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3889.412, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.34 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3689.930, Time=0.07 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3574.245, Time=0.11 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.49 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=1.02 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3576.245, Time=0.24 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 4.043 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1783.123
Date:                Sun, 12 Dec 2021   AIC                           3574.245
Time:                        19:41:04   BIC                           3593.008
Sample:                             0   HQIC                          3581.451
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1480      0.004   -302.430      0.000      -1.155      -1.141
ar.L2         -0.8300      0.008    -99.682      0.000      -0.846      -0.814
ar.L3         -0.3687      0.007    -50.527      0.000      -0.383      -0.354
sigma2         4.9055      0.028    175.970      0.000       4.851       4.960
===================================================================================
Ljung-Box (L1) (Q):                  11.61   Jarque-Bera (JB):           1261976.58
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.16   Skew:                             2.52
Prob(H) (two-sided):                  0.00   Kurtosis:                       196.90
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.04748, saving model to LSTM3.h5
90/90 - 3s - loss: 0.0655 - mse: 0.0655 - mae: 0.1772 - val_loss: 0.0475 - val_mse: 0.0475 - val_mae: 0.1684 - lr: 0.0010 - 3s/epoch - 37ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.04748 to 0.02168, saving model to LSTM3.h5
90/90 - 1s - loss: 0.0181 - mse: 0.0181 - mae: 0.1063 - val_loss: 0.0217 - val_mse: 0.0217 - val_mae: 0.1142 - lr: 0.0010 - 575ms/epoch - 6ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.02168
90/90 - 1s - loss: 0.0141 - mse: 0.0141 - mae: 0.0926 - val_loss: 0.0346 - val_mse: 0.0346 - val_mae: 0.1451 - lr: 0.0010 - 566ms/epoch - 6ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.02168
90/90 - 1s - loss: 0.0125 - mse: 0.0125 - mae: 0.0890 - val_loss: 0.0245 - val_mse: 0.0245 - val_mae: 0.1224 - lr: 0.0010 - 595ms/epoch - 7ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.02168
90/90 - 1s - loss: 0.0150 - mse: 0.0150 - mae: 0.0978 - val_loss: 0.0409 - val_mse: 0.0409 - val_mae: 0.1672 - lr: 0.0010 - 566ms/epoch - 6ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.02168
90/90 - 1s - loss: 0.0224 - mse: 0.0224 - mae: 0.1142 - val_loss: 0.0511 - val_mse: 0.0511 - val_mae: 0.1912 - lr: 0.0010 - 517ms/epoch - 6ms/step
Epoch 7/500

Epoch 00007: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00007: val_loss did not improve from 0.02168
90/90 - 1s - loss: 0.0285 - mse: 0.0285 - mae: 0.1222 - val_loss: 0.0412 - val_mse: 0.0412 - val_mae: 0.1672 - lr: 0.0010 - 565ms/epoch - 6ms/step
Epoch 8/500

Epoch 00008: val_loss improved from 0.02168 to 0.01361, saving model to LSTM3.h5
90/90 - 1s - loss: 0.0318 - mse: 0.0318 - mae: 0.1266 - val_loss: 0.0136 - val_mse: 0.0136 - val_mae: 0.0938 - lr: 1.0000e-04 - 583ms/epoch - 6ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.01361
90/90 - 1s - loss: 0.0075 - mse: 0.0075 - mae: 0.0666 - val_loss: 0.0168 - val_mse: 0.0168 - val_mae: 0.1017 - lr: 1.0000e-04 - 505ms/epoch - 6ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.01361
90/90 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0596 - val_loss: 0.0207 - val_mse: 0.0207 - val_mae: 0.1122 - lr: 1.0000e-04 - 500ms/epoch - 6ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.01361
90/90 - 1s - loss: 0.0059 - mse: 0.0059 - mae: 0.0599 - val_loss: 0.0244 - val_mse: 0.0244 - val_mae: 0.1218 - lr: 1.0000e-04 - 565ms/epoch - 6ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.01361
90/90 - 1s - loss: 0.0054 - mse: 0.0054 - mae: 0.0572 - val_loss: 0.0263 - val_mse: 0.0263 - val_mae: 0.1269 - lr: 1.0000e-04 - 500ms/epoch - 6ms/step
Epoch 13/500

Epoch 00013: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00013: val_loss did not improve from 0.01361
90/90 - 1s - loss: 0.0054 - mse: 0.0054 - mae: 0.0558 - val_loss: 0.0286 - val_mse: 0.0286 - val_mae: 0.1329 - lr: 1.0000e-04 - 539ms/epoch - 6ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.01361
90/90 - 1s - loss: 0.0052 - mse: 0.0052 - mae: 0.0552 - val_loss: 0.0282 - val_mse: 0.0282 - val_mae: 0.1317 - lr: 1.0000e-05 - 566ms/epoch - 6ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.01361
90/90 - 1s - loss: 0.0052 - mse: 0.0052 - mae: 0.0560 - val_loss: 0.0281 - val_mse: 0.0281 - val_mae: 0.1316 - lr: 1.0000e-05 - 533ms/epoch - 6ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.01361
90/90 - 1s - loss: 0.0052 - mse: 0.0052 - mae: 0.0565 - val_loss: 0.0283 - val_mse: 0.0283 - val_mae: 0.1319 - lr: 1.0000e-05 - 507ms/epoch - 6ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.01361
90/90 - 1s - loss: 0.0047 - mse: 0.0047 - mae: 0.0538 - val_loss: 0.0282 - val_mse: 0.0282 - val_mae: 0.1316 - lr: 1.0000e-05 - 520ms/epoch - 6ms/step
Epoch 18/500

Epoch 00018: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00018: val_loss did not improve from 0.01361
90/90 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0531 - val_loss: 0.0284 - val_mse: 0.0284 - val_mae: 0.1322 - lr: 1.0000e-05 - 495ms/epoch - 6ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.01361
90/90 - 1s - loss: 0.0051 - mse: 0.0051 - mae: 0.0560 - val_loss: 0.0287 - val_mse: 0.0287 - val_mae: 0.1331 - lr: 1.0000e-05 - 526ms/epoch - 6ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.01361
90/90 - 1s - loss: 0.0048 - mse: 0.0048 - mae: 0.0546 - val_loss: 0.0293 - val_mse: 0.0293 - val_mae: 0.1345 - lr: 1.0000e-05 - 580ms/epoch - 6ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.01361
90/90 - 1s - loss: 0.0046 - mse: 0.0046 - mae: 0.0521 - val_loss: 0.0295 - val_mse: 0.0295 - val_mae: 0.1350 - lr: 1.0000e-05 - 581ms/epoch - 6ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.01361
90/90 - 1s - loss: 0.0052 - mse: 0.0052 - mae: 0.0555 - val_loss: 0.0296 - val_mse: 0.0296 - val_mae: 0.1353 - lr: 1.0000e-05 - 511ms/epoch - 6ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.01361
90/90 - 1s - loss: 0.0047 - mse: 0.0047 - mae: 0.0536 - val_loss: 0.0296 - val_mse: 0.0296 - val_mae: 0.1353 - lr: 1.0000e-05 - 579ms/epoch - 6ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.01361
90/90 - 1s - loss: 0.0046 - mse: 0.0046 - mae: 0.0528 - val_loss: 0.0298 - val_mse: 0.0298 - val_mae: 0.1359 - lr: 1.0000e-05 - 533ms/epoch - 6ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.01361
90/90 - 1s - loss: 0.0049 - mse: 0.0049 - mae: 0.0550 - val_loss: 0.0303 - val_mse: 0.0303 - val_mae: 0.1372 - lr: 1.0000e-05 - 500ms/epoch - 6ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.01361
90/90 - 1s - loss: 0.0048 - mse: 0.0048 - mae: 0.0551 - val_loss: 0.0310 - val_mse: 0.0310 - val_mae: 0.1389 - lr: 1.0000e-05 - 531ms/epoch - 6ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.01361
90/90 - 1s - loss: 0.0052 - mse: 0.0052 - mae: 0.0551 - val_loss: 0.0312 - val_mse: 0.0312 - val_mae: 0.1396 - lr: 1.0000e-05 - 572ms/epoch - 6ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.01361
90/90 - 1s - loss: 0.0044 - mse: 0.0044 - mae: 0.0513 - val_loss: 0.0321 - val_mse: 0.0321 - val_mae: 0.1418 - lr: 1.0000e-05 - 520ms/epoch - 6ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.01361
90/90 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0556 - val_loss: 0.0320 - val_mse: 0.0320 - val_mae: 0.1415 - lr: 1.0000e-05 - 499ms/epoch - 6ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.01361
90/90 - 1s - loss: 0.0047 - mse: 0.0047 - mae: 0.0522 - val_loss: 0.0318 - val_mse: 0.0318 - val_mae: 0.1410 - lr: 1.0000e-05 - 506ms/epoch - 6ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.01361
90/90 - 1s - loss: 0.0051 - mse: 0.0051 - mae: 0.0555 - val_loss: 0.0318 - val_mse: 0.0318 - val_mae: 0.1412 - lr: 1.0000e-05 - 511ms/epoch - 6ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.01361
90/90 - 1s - loss: 0.0046 - mse: 0.0046 - mae: 0.0531 - val_loss: 0.0321 - val_mse: 0.0321 - val_mae: 0.1418 - lr: 1.0000e-05 - 501ms/epoch - 6ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.01361
90/90 - 1s - loss: 0.0049 - mse: 0.0049 - mae: 0.0552 - val_loss: 0.0322 - val_mse: 0.0322 - val_mae: 0.1420 - lr: 1.0000e-05 - 500ms/epoch - 6ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.01361
90/90 - 1s - loss: 0.0043 - mse: 0.0043 - mae: 0.0517 - val_loss: 0.0319 - val_mse: 0.0319 - val_mae: 0.1413 - lr: 1.0000e-05 - 501ms/epoch - 6ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.01361
90/90 - 1s - loss: 0.0052 - mse: 0.0052 - mae: 0.0548 - val_loss: 0.0320 - val_mse: 0.0320 - val_mae: 0.1415 - lr: 1.0000e-05 - 554ms/epoch - 6ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.01361
90/90 - 1s - loss: 0.0046 - mse: 0.0046 - mae: 0.0536 - val_loss: 0.0322 - val_mse: 0.0322 - val_mae: 0.1421 - lr: 1.0000e-05 - 501ms/epoch - 6ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.01361
90/90 - 1s - loss: 0.0045 - mse: 0.0045 - mae: 0.0525 - val_loss: 0.0323 - val_mse: 0.0323 - val_mae: 0.1423 - lr: 1.0000e-05 - 584ms/epoch - 6ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.01361
90/90 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0554 - val_loss: 0.0320 - val_mse: 0.0320 - val_mae: 0.1416 - lr: 1.0000e-05 - 496ms/epoch - 6ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.01361
90/90 - 1s - loss: 0.0043 - mse: 0.0043 - mae: 0.0519 - val_loss: 0.0326 - val_mse: 0.0326 - val_mae: 0.1433 - lr: 1.0000e-05 - 522ms/epoch - 6ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.01361
90/90 - 1s - loss: 0.0047 - mse: 0.0047 - mae: 0.0533 - val_loss: 0.0328 - val_mse: 0.0328 - val_mae: 0.1439 - lr: 1.0000e-05 - 580ms/epoch - 6ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.01361
90/90 - 1s - loss: 0.0049 - mse: 0.0049 - mae: 0.0545 - val_loss: 0.0324 - val_mse: 0.0324 - val_mae: 0.1427 - lr: 1.0000e-05 - 571ms/epoch - 6ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.01361
90/90 - 1s - loss: 0.0046 - mse: 0.0046 - mae: 0.0521 - val_loss: 0.0323 - val_mse: 0.0323 - val_mae: 0.1424 - lr: 1.0000e-05 - 573ms/epoch - 6ms/step
[Epochs 43–58: val_loss did not improve from 0.01361 in any epoch; training loss held near 0.004 and val_loss near 0.033 at lr = 1.0000e-05]
Epoch 00058: early stopping
SMA
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 18.67415384478757 
RMSE:	 4.321360184570081 
MAPE:	 3.534296685764838

EMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 55.32374016164833 
RMSE:	 7.437993019736462 
MAPE:	 6.054411328729787

WMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 50.84988253427031 
RMSE:	 7.1309103580307545 
MAPE:	 5.537694007766219

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	48.13% Accuracy
MSE:	 37.751390897405116 
RMSE:	 6.144216052305218 
MAPE:	 4.610910381239713

KAMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	51.12% Accuracy
MSE:	 36.41651471411913 
RMSE:	 6.034609740001348 
MAPE:	 4.797119170641582

MIDPOINT
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 142.27347883578213 
RMSE:	 11.927844685264063 
MAPE:	 10.30348805298139

T3
Prediction vs Close:		51.12% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 37.64075623558134 
RMSE:	 6.135206291200104 
MAPE:	 5.0145827751195515

TEMA
Prediction vs Close:		51.49% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 61.72382816732521 
RMSE:	 7.856451372427962 
MAPE:	 7.166001671992897
Runtime: mins: 14.323029246266666
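The per-MA metrics above are related in a fixed way (RMSE is the square root of MSE, as the code later computes with `mse ** 0.5`). A minimal NumPy sketch of how these metrics are derived from actual and predicted closes; the array values are illustrative, not taken from the experiment:

```python
import numpy as np

# Illustrative actual vs predicted closing prices (not the experiment's data)
actual = np.array([100.0, 102.0, 101.0, 105.0])
predicted = np.array([101.0, 101.0, 103.0, 104.0])

mse = np.mean((actual - predicted) ** 2)                      # mean squared error
rmse = np.sqrt(mse)                                           # root mean squared error
mape = np.mean(np.abs((actual - predicted) / actual)) * 100   # mean absolute percentage error
```

With `sklearn.metrics.mean_squared_error` and `mean_absolute_percentage_error` (used in the experiment code) the results are the same up to the percentage scaling.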

Architecture Used

In [ ]:
from google.colab import files
import cv2
uploaded = files.upload()
Saving Experiment3.png to Experiment3 (2).png
In [ ]:
img = cv2.imread('Experiment3.png')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # OpenCV reads BGR; convert for matplotlib display
plt.figure(figsize=(20,10))
plt.axis("off")
plt.title('LSTM Architecture Experiment3',fontsize=18)
plt.imshow(img)
Out[ ]:
<matplotlib.image.AxesImage at 0x7f4c2372be90>

Model Plots

In [161]:
with open('simulation3_data.json') as json_file:
    simulation3 = json.load(json_file)
imgfile = 'Experiment3'
In [162]:
for i in range(len(list(simulation3.keys()))):
  SIM = list(simulation3.keys())[i]
  plot_train(simulation3,SIM)
  plot_test(simulation3,SIM)
----- Train RMSE for SMA ----- 8.884133743304334
----- Train_MSE_LSTM for SMA ----- 78.92783236891867
----- Train MAE LSTM for SMA ----- 7.781090163518295
----- Test RMSE for SMA----- 4.321360184570081
----- Test_MSE_LSTM for SMA----- 18.67415384478757
----- Test_MAE_LSTM for SMA----- 3.534296685764838
----- Train RMSE for EMA ----- 10.435802766835762
----- Train_MSE_LSTM for EMA ----- 108.90597938829693
----- Train MAE LSTM for EMA ----- 9.400510274555584
----- Test RMSE for EMA----- 7.437993019736462
----- Test_MSE_LSTM for EMA----- 55.32374016164833
----- Test_MAE_LSTM for EMA----- 6.054411328729787
----- Train RMSE for WMA ----- 10.964398903404335
----- Train_MSE_LSTM for WMA ----- 120.21804331297417
----- Train MAE LSTM for WMA ----- 9.877309820725408
----- Test RMSE for WMA----- 7.1309103580307545
----- Test_MSE_LSTM for WMA----- 50.84988253427031
----- Test_MAE_LSTM for WMA----- 5.537694007766219
----- Train RMSE for DEMA ----- 13.217609293554304
----- Train_MSE_LSTM for DEMA ----- 174.7051954370531
----- Train MAE LSTM for DEMA ----- 12.005053275522988
----- Test RMSE for DEMA----- 6.144216052305218
----- Test_MSE_LSTM for DEMA----- 37.751390897405116
----- Test_MAE_LSTM for DEMA----- 4.610910381239713
----- Train RMSE for KAMA ----- 10.684929530335653
----- Train_MSE_LSTM for KAMA ----- 114.16771906823887
----- Train MAE LSTM for KAMA ----- 9.754891158846936
----- Test RMSE for KAMA----- 6.034609740001348
----- Test_MSE_LSTM for KAMA----- 36.41651471411913
----- Test_MAE_LSTM for KAMA----- 4.797119170641582
----- Train RMSE for MIDPOINT ----- 9.606108576890206
----- Train_MSE_LSTM for MIDPOINT ----- 92.27732199100357
----- Train MAE LSTM for MIDPOINT ----- 8.582213228909325
----- Test RMSE for MIDPOINT----- 11.927844685264063
----- Test_MSE_LSTM for MIDPOINT----- 142.27347883578213
----- Test_MAE_LSTM for MIDPOINT----- 10.30348805298139
----- Train RMSE for T3 ----- 12.127978025318718
----- Train_MSE_LSTM for T3 ----- 147.0878509826137
----- Train MAE LSTM for T3 ----- 10.973312625555451
----- Test RMSE for T3----- 6.135206291200104
----- Test_MSE_LSTM for T3----- 37.64075623558134
----- Test_MAE_LSTM for T3----- 5.0145827751195515
----- Train RMSE for TEMA ----- 7.4737545545252155
----- Train_MSE_LSTM for TEMA ----- 55.8570071412864
----- Train MAE LSTM for TEMA ----- 5.174422443364885
----- Test RMSE for TEMA----- 7.856451372427962
----- Test_MSE_LSTM for TEMA----- 61.72382816732521
----- Test_MAE_LSTM for TEMA----- 7.166001671992897

Univariate ARIMA Multistep Multivariate LSTM Hybrid Model: Experiment 4

From the above experiments it is evident that with higher moving-average periods the loss plots show unrepresentative data and underfitting; hence only the MAs with smaller periods, such as T3 or TRIMA, are kept. Going forward, EMA, WMA & DEMA will be ignored.
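The split used throughout these experiments decomposes each series into a smooth moving-average component (modelled by ARIMA) and the residual around it (modelled by the LSTM). A minimal NumPy sketch of that decomposition with a plain SMA standing in for the TA-Lib MA functions used in the notebook:

```python
import numpy as np

def sma(x, period):
    """Simple moving average; the first period-1 values are set to 0,
    mirroring the fillna(0) applied to the TA-Lib output above."""
    out = np.zeros_like(x, dtype=float)
    kernel = np.ones(period) / period
    out[period - 1:] = np.convolve(x, kernel, mode='valid')
    return out

close = np.array([10.0, 11.0, 12.0, 11.0, 13.0, 14.0, 13.0, 15.0])  # illustrative closes
low_vol = sma(close, period=3)   # smooth component -> ARIMA
high_vol = close - low_vol       # residual component -> LSTM
```

Because the residual is defined by subtraction, the two component predictions can simply be summed back to reconstruct a closing-price forecast, which is what `final_prediction` does below.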

In [ ]:
def get_lstm(data,original_data, train_len, test_len,img_file,ma ,lstm_len=3):
    # prepare train and test data
    X_value = pd.DataFrame(data.iloc[:, :])
    y_value = pd.DataFrame(data.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)  # scaled features consumed by get_X_y below
    y_scale_dataset = y_scaler.fit_transform(y_value)  # scaled target, inverse-transformed after prediction
    # Get data and check shape
    X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)#X will be of shape 224 X 3 X 21 (each 3 X 21 array will be 3 days' worth of data). yc will have the corresponding closing price value
    # pdb.set_trace()
    X_train, X_test, = split_train_test(X)
    y_train, y_test, = split_train_test(y)
    # yc_train, yc_test, = split_train_test(original_data)
    index_train, index_test, = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    det = 20
    input_dim = X_train.shape[1]#3
    feature_size = X_train.shape[2]#24
    output_dim = y_train.shape[1]#1



    # # Option 1
    # # Set up & fit LSTM RNN
    # model = Sequential()
    # model.add(LSTM(256, activation='relu', kernel_initializer='he_normal', input_shape=(input_dim, feature_size)))
    # model.add(Dense(units=64,activation='relu'))
    # model.add(Dropout(0.5))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(learning_rate = 0.001), loss='mse')

    # ## Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()


    # # option 2
    # model = Sequential()
    # model.add(Bidirectional(LSTM(units= 128), input_shape=(input_dim, feature_size)))
    # model.add(Dense(64))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(learning_rate = 0.001), loss='mean_squared_error', metrics=['accuracy'])
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()




    # # Option 3
    # # define custom activation
    # # 
    # class Double_Tanh(Activation):
    #     def __init__(self, activation, **kwargs):
    #         super(Double_Tanh, self).__init__(activation, **kwargs)
    #         self.__name__ = 'double_tanh'

    # def double_tanh(x):
    #     return (K.tanh(x) * 2)

    # get_custom_objects().update({'double_tanh':Double_Tanh(double_tanh)})
    #     # Model Generation
    # model = Sequential()
    # #check https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
    # model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2, kernel_regularizer=l1_l2(0.00,0.00), bias_regularizer=l1_l2(0.00,0.00)))
    # model.add(Dense(1))
    # model.add(Activation(double_tanh))
    # model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()

    # Option 4
    # Set up & fit LSTM RNN
    model = Sequential()
    model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(input_dim, feature_size)))
    model.add(LSTM(units=int(lstm_len/2)))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='mean_squared_error', optimizer='adam')
    # Common code
    callbacks = [
    EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    ModelCheckpoint('LSTM4.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    fname1 = img_file+'.png'
    tensorflow.keras.utils.plot_model(
        model, to_file=fname1, show_shapes=True, show_dtype=False,
        show_layer_names=True, expand_nested=False, dpi=96,
        layer_range=None, show_layer_activations=False
    )
    history = model.fit(X_train, y_train, epochs=500, batch_size=int( optimized_period[ma]), verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # plot loss
    fname2 = img_file+'-'+ma
    plt.title(img_file+'-'+ma+' Loss')
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    pyplot.plot(history.history['loss'], label='train')
    pyplot.plot(history.history['val_loss'], label='validation')
    pyplot.legend()
    pyplot.savefig(fname2+'.png',dpi='figure')
    pyplot.show()



    # Generate predictions
    predictiontr = model.predict(X_train, verbose=0)
    predictiontr = (y_scaler.inverse_transform(predictiontr)-det).tolist()
    outputtr = []
    for i in range(len(predictiontr)):
        outputtr.extend(predictiontr[i])
    predictiontr = outputtr
    # Generate error data

    ## replace with yc , xtest generated by new multistep method
    mse_tr = mean_squared_error(y_train, predictiontr)
    rmse_tr = mse_tr ** 0.5
    # mape = mean_absolute_percentage_error(X_test, pd.Series(predictiontr))
    mae_tr = mean_absolute_error(y_train, pd.Series(predictiontr))
    # Original_tr = pd.Series(yc_train)
    Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()


    predictionte = model.predict(X_test, verbose=0)
    predictionte =( y_scaler.inverse_transform(predictionte)-det).tolist()
    outputte = []
    for i in range(len(predictionte)):
        outputte.extend(predictionte[i])
    predictionte = outputte
    # Generate error data

    mse_te = mean_squared_error(y_test, predictionte)
    rmse_te = mse_te ** 0.5
    # mape = mean_absolute_percentage_error(X_test, pd.Series(predictionte))
    mae_te = mean_absolute_error(y_test, pd.Series(predictionte))
    # Original_te = pd.Series(yc_test)
    Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()

    return Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,Original_te,predictionte, mse_te,rmse_te,mae_te
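`get_lstm` scales the targets into (-1, 1) before training and maps the network output back to price space with `y_scaler.inverse_transform`. A self-contained round-trip sketch of that scaler usage with scikit-learn, using illustrative values only:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

y = np.array([[20.0], [25.0], [30.0], [40.0]])   # illustrative target column
scaler = MinMaxScaler(feature_range=(-1, 1))
y_scaled = scaler.fit_transform(y)               # network trains on values in (-1, 1)
y_back = scaler.inverse_transform(y_scaled)      # predictions mapped back to price space
```

Note that the sigmoid output layer in Option 4 emits values in (0, 1), so it can never reach the lower half of the (-1, 1) target range; a `tanh` output (or the custom double-tanh of Option 3) would match the scaler's range more naturally.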
In [ ]:
if __name__ == '__main__':
    start_time = timeit.default_timer()
    simulation4 = {}
    imgfile = 'Experiment4'
    for ma in optimized_period:
              print(ma)
              print(functions[ma])
              print ( int( optimized_period[ma]))
            # if ma == 'SMA':
              low_vol = df.apply(lambda c:  functions[ma](c, timeperiod = int( optimized_period[ma])))
              low_vol = low_vol.fillna(0)
              low_vol_data = df['close']
              high_vol = pd.DataFrame()
              df2 = df.copy()
              for i in df2.columns:
                if i in low_vol.columns:
                  high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
              high_vol_data = df['close']
              ## *****************************************************
              # Generate ARIMA and LSTM predictions
              print('\nWorking on ' + ma + ' predictions')
              try:
                print('parameters used : ', train_len, test_len)
                low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse,low_vol_mae = get_arima(low_vol,low_vol_data, train_len, test_len)
              except:
                  print('ARIMA error, skipping to next MA type')
                  continue
              Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,high_vol_Original, high_vol_prediction, high_vol_mse, high_vol_rmse,high_vol_mae, = get_lstm(high_vol,high_vol_data, train_len, test_len,imgfile,ma)
              final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr) # ignoring first 3 steps 
              mse_ftr = mean_squared_error(df['close'].head(train_len).values,final_prediction_tr.values)
              rmse_ftr = mse_ftr ** 0.5
              mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
              mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)

              final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)
              mse = mean_squared_error(df['close'].tail(test_len).values,final_prediction.values)
              rmse = mse ** 0.5
              mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
              mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
              # Generate prediction accuracy
              actual = df['close'].tail(test_len).values
              result_1 = []
              result_2 = []
              for i in range(1, len(final_prediction)):
                  # Compare prediction to previous close price
                  if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
                      result_1.append(1)
                  elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
                      result_1.append(1)
                  else:
                      result_1.append(0)

                  # Compare prediction to previous prediction
                  if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
                      result_2.append(1)
                  elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
                      result_2.append(1)
                  else:
                      result_2.append(0)

              accuracy_1 = np.mean(result_1)
              accuracy_2 = np.mean(result_2)

              simulation4[ma] = {'low_vol': {'original':list(low_vol_Original), 'prediction': list(low_vol_prediction) , 'mse': low_vol_mse,
                                            'rmse': low_vol_rmse, 'mae' : low_vol_mae},
                                'high_vol': {'original':list(high_vol_Original),'prediction': list(high_vol_prediction), 'mse': high_vol_mse,
                                            'rmse': high_vol_rmse, 'mae' : high_vol_mae},
                                'final_tr': {'original':df['close'].head(train_len).tolist(),'prediction': final_prediction_tr.values.tolist(), 'mse': mse_ftr,
                                            'rmse': rmse_ftr, 'mae' : mae_ftr},
                                'final': {'original': df['close'].tail(test_len).tolist(), 'prediction': final_prediction.values.tolist(), 'mse': mse,
                                          'rmse': rmse, 'mae': mae },
                                'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}

              # save simulation data here as checkpoint
              with open('simulation4_data.json', 'w') as fp:
                  json.dump(simulation4, fp)

              # use a separate name so the enclosing loop's `ma` is not shadowed
              for key in simulation4.keys():
                  print('\n' + key)
                  print('Prediction vs Close:\t\t' + str(round(100*simulation4[key]['accuracy']['prediction vs close'], 2))
                        + '% Accuracy')
                  print('Prediction vs Prediction:\t' + str(round(100*simulation4[key]['accuracy']['prediction vs prediction'], 2))
                        + '% Accuracy')
                  print('MSE:\t', simulation4[key]['final']['mse'],
                        '\nRMSE:\t', simulation4[key]['final']['rmse'],
                        '\nMAPE:\t', simulation4[key]['final']['mae'])#,
                        # '\nMAPE:\t', simulation[ma]['final']['mape'])
            # else:
            #   break
    elapsed = timeit.default_timer() - start_time
    print('Runtime: mins:',elapsed/60)
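The two directional-accuracy loops above can be written compactly with NumPy: `accuracy_1` counts steps where the prediction and the actual price both move in the same strict direction relative to the previous close, `accuracy_2` relative to the previous prediction (ties count as misses, matching the `else` branch of the loops). A sketch with illustrative arrays:

```python
import numpy as np

def directional_accuracy(pred, actual):
    """Vectorised equivalent of the result_1/result_2 loops above."""
    # prediction vs previous close
    up1 = (pred[1:] > actual[:-1]) & (actual[1:] > actual[:-1])
    down1 = (pred[1:] < actual[:-1]) & (actual[1:] < actual[:-1])
    acc_vs_close = np.mean(up1 | down1)
    # prediction vs previous prediction
    up2 = (pred[1:] > pred[:-1]) & (actual[1:] > actual[:-1])
    down2 = (pred[1:] < pred[:-1]) & (actual[1:] < actual[:-1])
    acc_vs_pred = np.mean(up2 | down2)
    return acc_vs_close, acc_vs_pred

pred = np.array([10.0, 12.0, 11.0, 13.0])      # illustrative predictions
actual = np.array([10.0, 11.0, 12.0, 12.5])    # illustrative closes
acc1, acc2 = directional_accuracy(pred, actual)
```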
SMA
SMA([input_arrays], [timeperiod=30])

Simple Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
17

Working on SMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.63 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4157.020, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3687.148, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.27 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3458.651, Time=0.10 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3322.133, Time=0.13 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=0.96 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=1.04 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3324.133, Time=0.27 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.514 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1657.067
Date:                Sun, 12 Dec 2021   AIC                           3322.133
Time:                        19:45:22   BIC                           3340.897
Sample:                             0   HQIC                          3329.339
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1966      0.003   -387.226      0.000      -1.203      -1.191
ar.L2         -0.8952      0.006   -138.692      0.000      -0.908      -0.883
ar.L3         -0.3968      0.006    -68.284      0.000      -0.408      -0.385
sigma2         3.5858      0.017    214.535      0.000       3.553       3.619
===================================================================================
Ljung-Box (L1) (Q):                  14.47   Jarque-Bera (JB):           2428881.42
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       271.99
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.04624, saving model to LSTM4.h5
48/48 - 5s - loss: 1.3658 - val_loss: 0.0462 - lr: 0.0010 - 5s/epoch - 108ms/step
[Epochs 2–51: val_loss did not improve from 0.04624 in any epoch; ReduceLROnPlateau cut the learning rate from 1.0000e-03 to 1.0000e-05 by epoch 11, and training loss plateaued near 0.98]
Epoch 00051: early stopping
SMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	49.63% Accuracy
MSE:	 25.538190210858644 
RMSE:	 5.053532448778641 
MAPE:	 3.962093187177072
EMA
EMA([input_arrays], [timeperiod=30])

Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
51

Working on EMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.57 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4231.556, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3761.238, Time=0.07 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.37 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3532.227, Time=0.09 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3394.496, Time=0.12 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.09 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.84 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3396.496, Time=0.26 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.466 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1693.248
Date:                Sun, 12 Dec 2021   AIC                           3394.496
Time:                        19:47:13   BIC                           3413.260
Sample:                             0   HQIC                          3401.702
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1982      0.003   -389.569      0.000      -1.204      -1.192
ar.L2         -0.8976      0.006   -139.811      0.000      -0.910      -0.885
ar.L3         -0.3984      0.006    -68.662      0.000      -0.410      -0.387
sigma2         3.9230      0.018    215.372      0.000       3.887       3.959
===================================================================================
Ljung-Box (L1) (Q):                  14.54   Jarque-Bera (JB):           2462173.05
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.82
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 
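The information criteria in the table follow directly from the reported log-likelihood. With k = 4 estimated parameters (three AR terms plus sigma2) and an effective sample of 805 (808 observations minus d = 3 differences), AIC = 2k - 2 ln L reproduces the printed 3394.496 and BIC = k ln(n) - 2 ln L the printed 3413.260.

```python
import math

log_lik = -1693.248   # Log Likelihood from the SARIMAX summary above
k = 4                 # ar.L1, ar.L2, ar.L3, sigma2
n_eff = 808 - 3       # effective sample after d=3 differencing

aic = 2 * k - 2 * log_lik
bic = k * math.log(n_eff) - 2 * log_lik

print(round(aic, 3), round(bic, 2))  # 3394.496 3413.26
```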

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.04898, saving model to LSTM4.h5
16/16 - 5s - loss: 1.4313 - val_loss: 0.0490 - lr: 0.0010 - 5s/epoch - 290ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.4217 - val_loss: 0.0497 - lr: 0.0010 - 124ms/epoch - 8ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.4128 - val_loss: 0.0504 - lr: 0.0010 - 150ms/epoch - 9ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.4040 - val_loss: 0.0511 - lr: 0.0010 - 134ms/epoch - 8ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3945 - val_loss: 0.0518 - lr: 0.0010 - 132ms/epoch - 8ms/step
Epoch 6/500

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00006: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3828 - val_loss: 0.0524 - lr: 0.0010 - 132ms/epoch - 8ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3731 - val_loss: 0.0524 - lr: 1.0000e-04 - 138ms/epoch - 9ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3714 - val_loss: 0.0525 - lr: 1.0000e-04 - 133ms/epoch - 8ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3698 - val_loss: 0.0526 - lr: 1.0000e-04 - 141ms/epoch - 9ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3682 - val_loss: 0.0526 - lr: 1.0000e-04 - 145ms/epoch - 9ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00011: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3666 - val_loss: 0.0527 - lr: 1.0000e-04 - 138ms/epoch - 9ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3656 - val_loss: 0.0527 - lr: 1.0000e-05 - 134ms/epoch - 8ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3654 - val_loss: 0.0527 - lr: 1.0000e-05 - 132ms/epoch - 8ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3653 - val_loss: 0.0527 - lr: 1.0000e-05 - 140ms/epoch - 9ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3651 - val_loss: 0.0527 - lr: 1.0000e-05 - 143ms/epoch - 9ms/step
Epoch 16/500

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00016: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3650 - val_loss: 0.0527 - lr: 1.0000e-05 - 132ms/epoch - 8ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3648 - val_loss: 0.0527 - lr: 1.0000e-05 - 144ms/epoch - 9ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3647 - val_loss: 0.0527 - lr: 1.0000e-05 - 132ms/epoch - 8ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3645 - val_loss: 0.0528 - lr: 1.0000e-05 - 138ms/epoch - 9ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3644 - val_loss: 0.0528 - lr: 1.0000e-05 - 138ms/epoch - 9ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3642 - val_loss: 0.0528 - lr: 1.0000e-05 - 140ms/epoch - 9ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3641 - val_loss: 0.0528 - lr: 1.0000e-05 - 142ms/epoch - 9ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3639 - val_loss: 0.0528 - lr: 1.0000e-05 - 132ms/epoch - 8ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3638 - val_loss: 0.0528 - lr: 1.0000e-05 - 137ms/epoch - 9ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3636 - val_loss: 0.0528 - lr: 1.0000e-05 - 132ms/epoch - 8ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3635 - val_loss: 0.0528 - lr: 1.0000e-05 - 131ms/epoch - 8ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3633 - val_loss: 0.0528 - lr: 1.0000e-05 - 137ms/epoch - 9ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3632 - val_loss: 0.0528 - lr: 1.0000e-05 - 134ms/epoch - 8ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3630 - val_loss: 0.0528 - lr: 1.0000e-05 - 142ms/epoch - 9ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3629 - val_loss: 0.0528 - lr: 1.0000e-05 - 124ms/epoch - 8ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3628 - val_loss: 0.0528 - lr: 1.0000e-05 - 133ms/epoch - 8ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3626 - val_loss: 0.0528 - lr: 1.0000e-05 - 141ms/epoch - 9ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3625 - val_loss: 0.0529 - lr: 1.0000e-05 - 141ms/epoch - 9ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3623 - val_loss: 0.0529 - lr: 1.0000e-05 - 136ms/epoch - 8ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3622 - val_loss: 0.0529 - lr: 1.0000e-05 - 140ms/epoch - 9ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3620 - val_loss: 0.0529 - lr: 1.0000e-05 - 140ms/epoch - 9ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3619 - val_loss: 0.0529 - lr: 1.0000e-05 - 139ms/epoch - 9ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3617 - val_loss: 0.0529 - lr: 1.0000e-05 - 142ms/epoch - 9ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3616 - val_loss: 0.0529 - lr: 1.0000e-05 - 154ms/epoch - 10ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3615 - val_loss: 0.0529 - lr: 1.0000e-05 - 132ms/epoch - 8ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3613 - val_loss: 0.0529 - lr: 1.0000e-05 - 134ms/epoch - 8ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3612 - val_loss: 0.0529 - lr: 1.0000e-05 - 148ms/epoch - 9ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3610 - val_loss: 0.0529 - lr: 1.0000e-05 - 146ms/epoch - 9ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3609 - val_loss: 0.0529 - lr: 1.0000e-05 - 151ms/epoch - 9ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3607 - val_loss: 0.0529 - lr: 1.0000e-05 - 132ms/epoch - 8ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3606 - val_loss: 0.0530 - lr: 1.0000e-05 - 143ms/epoch - 9ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3604 - val_loss: 0.0530 - lr: 1.0000e-05 - 174ms/epoch - 11ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3603 - val_loss: 0.0530 - lr: 1.0000e-05 - 139ms/epoch - 9ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3602 - val_loss: 0.0530 - lr: 1.0000e-05 - 146ms/epoch - 9ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3600 - val_loss: 0.0530 - lr: 1.0000e-05 - 139ms/epoch - 9ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.04898
16/16 - 0s - loss: 1.3599 - val_loss: 0.0530 - lr: 1.0000e-05 - 136ms/epoch - 8ms/step
Epoch 00051: early stopping

EMA
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 37.273178456615135 
RMSE:	 6.105176365725656 
MAPE:	 4.793024214680565
WMA
WMA([input_arrays], [timeperiod=30])

Weighted Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
49
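The WMA above applies linearly increasing weights 1..n across the lookback window, so the most recent price carries the most weight. A small NumPy sketch of that definition (illustrative, believed to match the standard weighted-moving-average formula TA-Lib implements):

```python
import numpy as np

def wma(prices, timeperiod=30):
    """Weighted moving average with linear weights 1..n (newest
    observation weighted highest). Leading values are NaN until a
    full window is available."""
    prices = np.asarray(prices, dtype=float)
    weights = np.arange(1, timeperiod + 1, dtype=float)
    out = np.full(len(prices), np.nan)
    for i in range(timeperiod - 1, len(prices)):
        window = prices[i - timeperiod + 1 : i + 1]
        out[i] = np.dot(window, weights) / weights.sum()
    return out
```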

Working on WMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.56 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4264.089, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3793.930, Time=0.05 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.33 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3564.923, Time=0.09 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3427.258, Time=0.13 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.66 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.61 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3429.258, Time=0.24 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.704 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1709.629
Date:                Sun, 12 Dec 2021   AIC                           3427.258
Time:                        19:48:54   BIC                           3446.021
Sample:                             0   HQIC                          3434.464
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1981      0.003   -389.386      0.000      -1.204      -1.192
ar.L2         -0.8974      0.006   -139.699      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.737      0.000      -0.410      -0.387
sigma2         4.0860      0.019    215.311      0.000       4.049       4.123
===================================================================================
Ljung-Box (L1) (Q):                  14.57   Jarque-Bera (JB):           2460901.70
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.75
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.04201, saving model to LSTM4.h5
17/17 - 5s - loss: 1.2648 - val_loss: 0.0420 - lr: 0.0010 - 5s/epoch - 276ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.04201
17/17 - 0s - loss: 1.1906 - val_loss: 0.0431 - lr: 0.0010 - 162ms/epoch - 10ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.04201
17/17 - 0s - loss: 1.1270 - val_loss: 0.0444 - lr: 0.0010 - 158ms/epoch - 9ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.04201
17/17 - 0s - loss: 1.0709 - val_loss: 0.0459 - lr: 0.0010 - 142ms/epoch - 8ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.04201
17/17 - 0s - loss: 1.0208 - val_loss: 0.0475 - lr: 0.0010 - 132ms/epoch - 8ms/step
Epoch 6/500

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00006: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9757 - val_loss: 0.0493 - lr: 0.0010 - 140ms/epoch - 8ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9491 - val_loss: 0.0494 - lr: 1.0000e-04 - 140ms/epoch - 8ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9453 - val_loss: 0.0496 - lr: 1.0000e-04 - 185ms/epoch - 11ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9416 - val_loss: 0.0498 - lr: 1.0000e-04 - 158ms/epoch - 9ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9380 - val_loss: 0.0500 - lr: 1.0000e-04 - 147ms/epoch - 9ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00011: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9344 - val_loss: 0.0501 - lr: 1.0000e-04 - 144ms/epoch - 8ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9321 - val_loss: 0.0502 - lr: 1.0000e-05 - 146ms/epoch - 9ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9317 - val_loss: 0.0502 - lr: 1.0000e-05 - 142ms/epoch - 8ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9314 - val_loss: 0.0502 - lr: 1.0000e-05 - 149ms/epoch - 9ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9310 - val_loss: 0.0502 - lr: 1.0000e-05 - 159ms/epoch - 9ms/step
Epoch 16/500

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00016: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9307 - val_loss: 0.0502 - lr: 1.0000e-05 - 156ms/epoch - 9ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9303 - val_loss: 0.0503 - lr: 1.0000e-05 - 141ms/epoch - 8ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9300 - val_loss: 0.0503 - lr: 1.0000e-05 - 133ms/epoch - 8ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9296 - val_loss: 0.0503 - lr: 1.0000e-05 - 177ms/epoch - 10ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9292 - val_loss: 0.0503 - lr: 1.0000e-05 - 142ms/epoch - 8ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9289 - val_loss: 0.0503 - lr: 1.0000e-05 - 175ms/epoch - 10ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9285 - val_loss: 0.0504 - lr: 1.0000e-05 - 154ms/epoch - 9ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9281 - val_loss: 0.0504 - lr: 1.0000e-05 - 135ms/epoch - 8ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9278 - val_loss: 0.0504 - lr: 1.0000e-05 - 135ms/epoch - 8ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9274 - val_loss: 0.0504 - lr: 1.0000e-05 - 139ms/epoch - 8ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9270 - val_loss: 0.0505 - lr: 1.0000e-05 - 134ms/epoch - 8ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9267 - val_loss: 0.0505 - lr: 1.0000e-05 - 188ms/epoch - 11ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9263 - val_loss: 0.0505 - lr: 1.0000e-05 - 197ms/epoch - 12ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9259 - val_loss: 0.0505 - lr: 1.0000e-05 - 139ms/epoch - 8ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9256 - val_loss: 0.0505 - lr: 1.0000e-05 - 131ms/epoch - 8ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9252 - val_loss: 0.0506 - lr: 1.0000e-05 - 139ms/epoch - 8ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9248 - val_loss: 0.0506 - lr: 1.0000e-05 - 143ms/epoch - 8ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9245 - val_loss: 0.0506 - lr: 1.0000e-05 - 144ms/epoch - 8ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9241 - val_loss: 0.0506 - lr: 1.0000e-05 - 184ms/epoch - 11ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9237 - val_loss: 0.0507 - lr: 1.0000e-05 - 146ms/epoch - 9ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9234 - val_loss: 0.0507 - lr: 1.0000e-05 - 141ms/epoch - 8ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9230 - val_loss: 0.0507 - lr: 1.0000e-05 - 135ms/epoch - 8ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9226 - val_loss: 0.0507 - lr: 1.0000e-05 - 143ms/epoch - 8ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9223 - val_loss: 0.0508 - lr: 1.0000e-05 - 140ms/epoch - 8ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9219 - val_loss: 0.0508 - lr: 1.0000e-05 - 132ms/epoch - 8ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9215 - val_loss: 0.0508 - lr: 1.0000e-05 - 175ms/epoch - 10ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9212 - val_loss: 0.0508 - lr: 1.0000e-05 - 143ms/epoch - 8ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9208 - val_loss: 0.0509 - lr: 1.0000e-05 - 137ms/epoch - 8ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9204 - val_loss: 0.0509 - lr: 1.0000e-05 - 140ms/epoch - 8ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9201 - val_loss: 0.0509 - lr: 1.0000e-05 - 141ms/epoch - 8ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9197 - val_loss: 0.0509 - lr: 1.0000e-05 - 181ms/epoch - 11ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9193 - val_loss: 0.0510 - lr: 1.0000e-05 - 136ms/epoch - 8ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9190 - val_loss: 0.0510 - lr: 1.0000e-05 - 162ms/epoch - 10ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9186 - val_loss: 0.0510 - lr: 1.0000e-05 - 141ms/epoch - 8ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9182 - val_loss: 0.0510 - lr: 1.0000e-05 - 179ms/epoch - 11ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.04201
17/17 - 0s - loss: 0.9179 - val_loss: 0.0511 - lr: 1.0000e-05 - 146ms/epoch - 9ms/step
Epoch 00051: early stopping

WMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 55.57385349960206 
RMSE:	 7.454787287347779 
MAPE:	 6.073522122630416
DEMA
DEMA([input_arrays], [timeperiod=30])

Double Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
89
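DEMA is defined as 2*EMA(price) - EMA(EMA(price)), which reduces the lag of a single EMA. A pandas sketch of that identity follows; note that pandas' `ewm` seeds its EMA from the first value, so output near the start of the series will differ from TA-Lib's.

```python
import pandas as pd

def dema(prices, timeperiod=30):
    """DEMA = 2*EMA(price) - EMA(EMA(price)). Uses pandas ewm
    (adjust=False) for both EMAs; seeding differs from TA-Lib's,
    so early values are only approximate."""
    s = pd.Series(prices, dtype=float)
    ema1 = s.ewm(span=timeperiod, adjust=False).mean()
    ema2 = ema1.ewm(span=timeperiod, adjust=False).mean()
    return 2 * ema1 - ema2
```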

Working on DEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.56 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4436.126, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3965.317, Time=0.05 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.52 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3736.589, Time=0.08 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3598.951, Time=0.10 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.23 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=1.20 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3600.951, Time=0.25 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 4.032 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1795.475
Date:                Sun, 12 Dec 2021   AIC                           3598.951
Time:                        19:50:36   BIC                           3617.714
Sample:                             0   HQIC                          3606.157
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1983      0.003   -389.581      0.000      -1.204      -1.192
ar.L2         -0.8973      0.006   -139.732      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.649      0.000      -0.410      -0.387
sigma2         5.0573      0.023    215.292      0.000       5.011       5.103
===================================================================================
Ljung-Box (L1) (Q):                  14.41   Jarque-Bera (JB):           2460553.80
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.89
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.74
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.05146, saving model to LSTM4.h5
10/10 - 5s - loss: 1.3440 - val_loss: 0.0515 - lr: 0.0010 - 5s/epoch - 504ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.05146
10/10 - 0s - loss: 1.3119 - val_loss: 0.0516 - lr: 0.0010 - 97ms/epoch - 10ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.05146
10/10 - 0s - loss: 1.2835 - val_loss: 0.0517 - lr: 0.0010 - 84ms/epoch - 8ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.05146
10/10 - 0s - loss: 1.2575 - val_loss: 0.0520 - lr: 0.0010 - 102ms/epoch - 10ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.05146
10/10 - 0s - loss: 1.2329 - val_loss: 0.0523 - lr: 0.0010 - 95ms/epoch - 9ms/step
Epoch 6/500

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00006: val_loss did not improve from 0.05146
10/10 - 0s - loss: 1.2091 - val_loss: 0.0528 - lr: 0.0010 - 90ms/epoch - 9ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.05146
10/10 - 0s - loss: 1.1925 - val_loss: 0.0529 - lr: 1.0000e-04 - 104ms/epoch - 10ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.05146
10/10 - 0s - loss: 1.1901 - val_loss: 0.0529 - lr: 1.0000e-04 - 101ms/epoch - 10ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.05146
10/10 - 0s - loss: 1.1878 - val_loss: 0.0530 - lr: 1.0000e-04 - 113ms/epoch - 11ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.05146
10/10 - 0s - loss: 1.1855 - val_loss: 0.0530 - lr: 1.0000e-04 - 103ms/epoch - 10ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00011: val_loss did not improve from 0.05146
10/10 - 0s - loss: 1.1832 - val_loss: 0.0531 - lr: 1.0000e-04 - 87ms/epoch - 9ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.05146
10/10 - 0s - loss: 1.1815 - val_loss: 0.0531 - lr: 1.0000e-05 - 93ms/epoch - 9ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.05146
10/10 - 0s - loss: 1.1813 - val_loss: 0.0531 - lr: 1.0000e-05 - 93ms/epoch - 9ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.05146
10/10 - 0s - loss: 1.1811 - val_loss: 0.0531 - lr: 1.0000e-05 - 100ms/epoch - 10ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.05146
10/10 - 0s - loss: 1.1809 - val_loss: 0.0531 - lr: 1.0000e-05 - 92ms/epoch - 9ms/step
Epoch 16/500

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00016: val_loss did not improve from 0.05146
10/10 - 0s - loss: 1.1806 - val_loss: 0.0531 - lr: 1.0000e-05 - 89ms/epoch - 9ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.05146
10/10 - 0s - loss: 1.1804 - val_loss: 0.0531 - lr: 1.0000e-05 - 93ms/epoch - 9ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.05146
10/10 - 0s - loss: 1.1802 - val_loss: 0.0531 - lr: 1.0000e-05 - 106ms/epoch - 11ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.05146
10/10 - 0s - loss: 1.1799 - val_loss: 0.0531 - lr: 1.0000e-05 - 110ms/epoch - 11ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.05146
10/10 - 0s - loss: 1.1797 - val_loss: 0.0531 - lr: 1.0000e-05 - 99ms/epoch - 10ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.05146
10/10 - 0s - loss: 1.1795 - val_loss: 0.0531 - lr: 1.0000e-05 - 98ms/epoch - 10ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.05146
10/10 - 0s - loss: 1.1792 - val_loss: 0.0531 - lr: 1.0000e-05 - 87ms/epoch - 9ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.05146
10/10 - 0s - loss: 1.1790 - val_loss: 0.0531 - lr: 1.0000e-05 - 90ms/epoch - 9ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.05146
10/10 - 0s - loss: 1.1788 - val_loss: 0.0531 - lr: 1.0000e-05 - 87ms/epoch - 9ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.05146
10/10 - 0s - loss: 1.1785 - val_loss: 0.0532 - lr: 1.0000e-05 - 106ms/epoch - 11ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.05146
10/10 - 0s - loss: 1.1783 - val_loss: 0.0532 - lr: 1.0000e-05 - 106ms/epoch - 11ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.05146
10/10 - 0s - loss: 1.1781 - val_loss: 0.0532 - lr: 1.0000e-05 - 104ms/epoch - 10ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.05146
10/10 - 0s - loss: 1.1778 - val_loss: 0.0532 - lr: 1.0000e-05 - 97ms/epoch - 10ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.05146
10/10 - 0s - loss: 1.1776 - val_loss: 0.0532 - lr: 1.0000e-05 - 114ms/epoch - 11ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.05146
10/10 - 0s - loss: 1.1774 - val_loss: 0.0532 - lr: 1.0000e-05 - 95ms/epoch - 10ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.05146
10/10 - 0s - loss: 1.1771 - val_loss: 0.0532 - lr: 1.0000e-05 - 92ms/epoch - 9ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.05146
10/10 - 0s - loss: 1.1769 - val_loss: 0.0532 - lr: 1.0000e-05 - 88ms/epoch - 9ms/step
[epochs 33–51: val_loss did not improve from 0.05146 (loss 1.1767 → 1.1725, val_loss 0.0532–0.0533, lr held at 1.0000e-05, ~100ms/epoch)]
Epoch 00051: early stopping
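
The plateau pattern above comes from the interplay of Keras's `ReduceLROnPlateau` and `EarlyStopping` callbacks. A minimal pure-Python sketch of that bookkeeping; the `factor=0.1`, `lr_patience=5`, and `stop_patience=50` values are assumptions inferred from the log, not the notebook's verified settings:

```python
def monitor(val_losses, lr=1e-3, factor=0.1, lr_patience=5,
            stop_patience=50, min_lr=1e-5):
    """Mimic ReduceLROnPlateau + EarlyStopping bookkeeping on a val_loss series.

    Assumed settings (factor, patience values) are inferred from the log."""
    best, since_improve = float("inf"), 0
    for epoch, vl in enumerate(val_losses, start=1):
        if vl < best:
            best, since_improve = vl, 0
        else:
            since_improve += 1
        if since_improve and since_improve % lr_patience == 0:
            lr = max(lr * factor, min_lr)   # cut the learning rate on a plateau
        if since_improve >= stop_patience:
            return epoch, best, lr          # early stopping fires here
    return len(val_losses), best, lr
```

With these assumed patience values, a series that improves once at epoch 1 and then plateaus reproduces the log's shape: learning-rate cuts at epochs 6 and 11 (then clipped at `min_lr`), and a stop at epoch 51.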
SMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	49.63% Accuracy
MSE:	 25.538190210858644 
RMSE:	 5.053532448778641 
MAPE:	 3.962093187177072

EMA
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 37.273178456615135 
RMSE:	 6.105176365725656 
MAPE:	 4.793024214680565

WMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 55.57385349960206 
RMSE:	 7.454787287347779 
MAPE:	 6.073522122630416

DEMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	48.13% Accuracy
MSE:	 141.86028017408532 
RMSE:	 11.910511331344482 
MAPE:	 10.597824101027992
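
Each metrics block above follows the standard definitions of MSE, RMSE, and MAPE plus a directional hit rate. A self-contained sketch of those formulas; the notebook's exact comparison logic may differ (for instance, the "Prediction vs Prediction" line presumably compares each move against the previous prediction rather than the close):

```python
import math

def report(pred, close):
    """MSE, RMSE, MAPE (%) and directional accuracy (%) for a forecast series."""
    n = len(pred)
    mse = sum((p - c) ** 2 for p, c in zip(pred, close)) / n
    rmse = math.sqrt(mse)
    mape = 100.0 * sum(abs((c - p) / c) for p, c in zip(pred, close)) / n
    # directional hit rate: predicted move direction vs. actual move direction
    hits = sum(((pred[i + 1] - pred[i]) > 0) == ((close[i + 1] - close[i]) > 0)
               for i in range(n - 1))
    return mse, rmse, mape, 100.0 * hits / (n - 1)
```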
KAMA
KAMA([input_arrays], [timeperiod=30])

Kaufman Adaptive Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
18
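
For reference, Kaufman's adaptive moving average can be sketched in a few lines. This follows the textbook recursion (efficiency ratio → adaptive smoothing constant → EMA update); TA-Lib's warm-up and seeding differ slightly, so early values will not match its output exactly:

```python
def kama(price, timeperiod=30, fast=2, slow=30):
    """Kaufman Adaptive Moving Average: an EMA whose smoothing constant
    adapts to the efficiency ratio (net change vs. total path length)."""
    fastest, slowest = 2.0 / (fast + 1), 2.0 / (slow + 1)
    out, prev = [], float(price[timeperiod - 1])   # seed with last warm-up price
    for t in range(timeperiod, len(price)):
        change = abs(price[t] - price[t - timeperiod])
        volatility = sum(abs(price[i] - price[i - 1])
                         for i in range(t - timeperiod + 1, t + 1))
        er = change / volatility if volatility else 0.0   # efficiency ratio in [0, 1]
        sc = (er * (fastest - slowest) + slowest) ** 2    # adaptive smoothing constant
        prev += sc * (price[t] - prev)
        out.append(prev)
    return out
```

In a clean trend the efficiency ratio approaches 1 and KAMA tracks price almost like a fast EMA; in choppy, mean-reverting stretches it approaches 0 and the average barely moves.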

Working on KAMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.50 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4190.464, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3724.371, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.36 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3494.154, Time=0.10 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3357.435, Time=0.12 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.53 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.97 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3359.435, Time=0.31 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.994 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1674.717
Date:                Sun, 12 Dec 2021   AIC                           3357.435
Time:                        19:52:08   BIC                           3376.198
Sample:                             0   HQIC                          3364.641
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1955      0.003   -381.246      0.000      -1.202      -1.189
ar.L2         -0.8964      0.007   -135.835      0.000      -0.909      -0.883
ar.L3         -0.3971      0.006    -67.229      0.000      -0.409      -0.385
sigma2         3.7466      0.018    211.623      0.000       3.712       3.781
===================================================================================
Ljung-Box (L1) (Q):                  14.20   Jarque-Bera (JB):           2338363.32
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.01   Skew:                             3.76
Prob(H) (two-sided):                  0.00   Kurtosis:                       266.93
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.05012, saving model to LSTM4.h5
45/45 - 5s - loss: 1.3770 - val_loss: 0.0501 - lr: 0.0010 - 5s/epoch - 115ms/step
[epochs 2–51: val_loss did not improve from 0.05012 (loss 1.2695 → 0.8985, val_loss 0.0517 → 0.0684); ReduceLROnPlateau cut lr to 1.0000e-04 at epoch 6 and 1.0000e-05 at epoch 11]
Epoch 00051: early stopping
KAMA
Prediction vs Close:		54.85% Accuracy
Prediction vs Prediction:	50.0% Accuracy
MSE:	 22.996866158063977 
RMSE:	 4.795504786575025 
MAPE:	 3.79818199604115
MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])

MidPoint over period (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 14
Outputs:
    real
14
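
MIDPOINT is the simplest of these overlays: the mean of the highest and lowest price in each rolling window. A short sketch; it returns only the fully-formed windows, whereas TA-Lib left-pads the output with NaN for the warm-up period:

```python
def midpoint(price, timeperiod=14):
    """MidPoint over period: (highest + lowest) / 2 in each rolling window."""
    return [(max(price[t - timeperiod + 1: t + 1]) +
             min(price[t - timeperiod + 1: t + 1])) / 2
            for t in range(timeperiod - 1, len(price))]
```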

Working on MIDPOINT predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.48 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4212.289, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3747.746, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.33 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3523.401, Time=0.09 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3387.759, Time=0.12 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.61 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=1.14 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3389.758, Time=0.25 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 4.131 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1689.879
Date:                Sun, 12 Dec 2021   AIC                           3387.759
Time:                        19:54:01   BIC                           3406.522
Sample:                             0   HQIC                          3394.964
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1878      0.003   -345.315      0.000      -1.195      -1.181
ar.L2         -0.8876      0.007   -121.809      0.000      -0.902      -0.873
ar.L3         -0.3957      0.007    -60.127      0.000      -0.409      -0.383
sigma2         3.8904      0.020    193.404      0.000       3.851       3.930
===================================================================================
Ljung-Box (L1) (Q):                  13.21   Jarque-Bera (JB):           1659080.01
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.08   Skew:                             3.28
Prob(H) (two-sided):                  0.00   Kurtosis:                       225.31
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.05368, saving model to LSTM4.h5
58/58 - 5s - loss: 1.3406 - val_loss: 0.0537 - lr: 0.0010 - 5s/epoch - 88ms/step
[epochs 2–51: val_loss did not improve from 0.05368 (loss 1.1374 → 0.7193, val_loss 0.0598 → 0.0901); ReduceLROnPlateau cut lr to 1.0000e-04 at epoch 6 and 1.0000e-05 at epoch 11]
Epoch 00051: early stopping

MIDPOINT
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 16.742777096749265 
RMSE:	 4.091793872710265 
MAPE:	 3.2612983803063513
T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])

Triple Exponential Moving Average (T3) (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 5
    vfactor: 0.7
Outputs:
    real
19
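
T3 is three passes of Tillson's "generalized DEMA", GD(x) = EMA(x)·(1+v) − EMA(EMA(x))·v, with v = vfactor. A sketch that seeds each EMA with the first sample; TA-Lib's warm-up differs, so early values will not match its output exactly:

```python
def ema(x, n):
    """Exponential moving average, seeded with the first sample."""
    k, out = 2.0 / (n + 1), [float(x[0])]
    for v in x[1:]:
        out.append(out[-1] + k * (v - out[-1]))
    return out

def t3(price, timeperiod=5, vfactor=0.7):
    """Tillson T3: three passes of GD(x) = EMA(x)*(1+v) - EMA(EMA(x))*v."""
    def gd(x):
        e1 = ema(x, timeperiod)
        e2 = ema(e1, timeperiod)
        return [a * (1 + vfactor) - b * vfactor for a, b in zip(e1, e2)]
    return gd(gd(gd(price)))
```

The vfactor blends between a triple EMA (v = 0) and a triple DEMA (v = 1); 0.7 trades a little overshoot for less lag.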

Working on T3 predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.50 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4414.515, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3944.062, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.48 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3715.173, Time=0.08 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3577.471, Time=0.11 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.78 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.80 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3579.471, Time=0.24 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 4.118 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1784.736
Date:                Sun, 12 Dec 2021   AIC                           3577.471
Time:                        19:56:15   BIC                           3596.235
Sample:                             0   HQIC                          3584.677
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1982      0.003   -389.844      0.000      -1.204      -1.192
ar.L2         -0.8974      0.006   -139.861      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.862      0.000      -0.410      -0.387
sigma2         4.9242      0.023    215.469      0.000       4.879       4.969
===================================================================================
Ljung-Box (L1) (Q):                  14.55   Jarque-Bera (JB):           2468024.38
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       274.15
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.06273, saving model to LSTM4.h5
43/43 - 5s - loss: 1.4030 - val_loss: 0.0627 - lr: 0.0010 - 5s/epoch - 116ms/step
[epochs 2–40: val_loss did not improve from 0.06273 (loss 1.3258 → 0.8550, val_loss ranged 0.0641–0.0730); ReduceLROnPlateau cut lr to 1.0000e-04 at epoch 6 and 1.0000e-05 at epoch 11]
43/43 - 0s - loss: 0.8546 - val_loss: 0.0731 - lr: 1.0000e-05 - 318ms/epoch - 7ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.06273
43/43 - 0s - loss: 0.8543 - val_loss: 0.0731 - lr: 1.0000e-05 - 312ms/epoch - 7ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.06273
43/43 - 0s - loss: 0.8539 - val_loss: 0.0732 - lr: 1.0000e-05 - 327ms/epoch - 8ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.06273
43/43 - 0s - loss: 0.8535 - val_loss: 0.0733 - lr: 1.0000e-05 - 314ms/epoch - 7ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.06273
43/43 - 0s - loss: 0.8531 - val_loss: 0.0733 - lr: 1.0000e-05 - 322ms/epoch - 7ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.06273
43/43 - 0s - loss: 0.8528 - val_loss: 0.0734 - lr: 1.0000e-05 - 345ms/epoch - 8ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.06273
43/43 - 0s - loss: 0.8524 - val_loss: 0.0735 - lr: 1.0000e-05 - 315ms/epoch - 7ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.06273
43/43 - 0s - loss: 0.8520 - val_loss: 0.0735 - lr: 1.0000e-05 - 314ms/epoch - 7ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.06273
43/43 - 0s - loss: 0.8516 - val_loss: 0.0736 - lr: 1.0000e-05 - 349ms/epoch - 8ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.06273
43/43 - 0s - loss: 0.8512 - val_loss: 0.0736 - lr: 1.0000e-05 - 320ms/epoch - 7ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.06273
43/43 - 0s - loss: 0.8509 - val_loss: 0.0737 - lr: 1.0000e-05 - 323ms/epoch - 8ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.06273
43/43 - 0s - loss: 0.8505 - val_loss: 0.0738 - lr: 1.0000e-05 - 335ms/epoch - 8ms/step
Epoch 00051: early stopping
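The pattern in these logs (learning rate cut by 10x after every 5 non-improving epochs down to 1e-05, training halted after 50 non-improving epochs) follows from the `ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001)` and `EarlyStopping(patience=50)` callbacks configured later in the notebook. A simplified pure-Python simulation of that bookkeeping (not the Keras internals):

```python
def simulate_callbacks(val_losses, lr=1e-3, factor=0.1, lr_patience=5,
                       min_lr=1e-5, stop_patience=50):
    """Sketch of ReduceLROnPlateau + EarlyStopping accounting."""
    best = float("inf")
    wait_lr = wait_stop = 0
    stopped_at = None
    for epoch, vl in enumerate(val_losses, start=1):
        if vl < best:
            best, wait_lr, wait_stop = vl, 0, 0
            continue
        wait_lr += 1
        wait_stop += 1
        if wait_lr >= lr_patience:       # 5 non-improving epochs -> cut lr by 10x
            lr = max(lr * factor, min_lr)
            wait_lr = 0
        if wait_stop >= stop_patience:   # 50 non-improving epochs -> stop
            stopped_at = epoch
            break
    return lr, stopped_at

# val_loss improves once at epoch 1, then drifts upward, as in the run above
lr, stopped_at = simulate_callbacks([0.0573] + [0.06 + 0.001 * i for i in range(50)])
```

Running this reproduces the shape of the log: reductions at epochs 6, 11 and 16 (clipped at `min_lr`), then early stopping at epoch 51.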
SMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	49.63% Accuracy
MSE:	 25.538190210858644 
RMSE:	 5.053532448778641 
MAPE:	 3.962093187177072

EMA
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 37.273178456615135 
RMSE:	 6.105176365725656 
MAPE:	 4.793024214680565

WMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 55.57385349960206 
RMSE:	 7.454787287347779 
MAPE:	 6.073522122630416

DEMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	48.13% Accuracy
MSE:	 141.86028017408532 
RMSE:	 11.910511331344482 
MAPE:	 10.597824101027992

KAMA
Prediction vs Close:		54.85% Accuracy
Prediction vs Prediction:	50.0% Accuracy
MSE:	 22.996866158063977 
RMSE:	 4.795504786575025 
MAPE:	 3.79818199604115

MIDPOINT
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 16.742777096749265 
RMSE:	 4.091793872710265 
MAPE:	 3.2612983803063513

T3
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 60.65482492291343 
RMSE:	 7.788120756826606 
MAPE:	 6.228183408991355
TEMA
TEMA([input_arrays], [timeperiod=30])

Triple Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
9

Working on TEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.63 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4352.703, Time=0.06 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3889.412, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.37 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3689.930, Time=0.07 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3574.245, Time=0.10 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.42 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=1.06 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3576.245, Time=0.26 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 4.035 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1783.123
Date:                Sun, 12 Dec 2021   AIC                           3574.245
Time:                        19:58:04   BIC                           3593.008
Sample:                             0   HQIC                          3581.451
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1480      0.004   -302.430      0.000      -1.155      -1.141
ar.L2         -0.8300      0.008    -99.682      0.000      -0.846      -0.814
ar.L3         -0.3687      0.007    -50.527      0.000      -0.383      -0.354
sigma2         4.9055      0.028    175.970      0.000       4.851       4.960
===================================================================================
Ljung-Box (L1) (Q):                  11.61   Jarque-Bera (JB):           1261976.58
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.16   Skew:                             2.52
Prob(H) (two-sided):                  0.00   Kurtosis:                       196.90
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.05732, saving model to LSTM4.h5
90/90 - 6s - loss: 1.3448 - val_loss: 0.0573 - lr: 0.0010 - 6s/epoch - 62ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.05732
90/90 - 1s - loss: 1.2046 - val_loss: 0.0600 - lr: 0.0010 - 614ms/epoch - 7ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.05732
90/90 - 1s - loss: 1.0384 - val_loss: 0.0684 - lr: 0.0010 - 619ms/epoch - 7ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.05732
90/90 - 1s - loss: 0.9193 - val_loss: 0.0780 - lr: 0.0010 - 638ms/epoch - 7ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.05732
90/90 - 1s - loss: 0.8481 - val_loss: 0.0876 - lr: 0.0010 - 623ms/epoch - 7ms/step
Epoch 6/500

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00006: val_loss did not improve from 0.05732
90/90 - 1s - loss: 0.7990 - val_loss: 0.0971 - lr: 0.0010 - 629ms/epoch - 7ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.05732
90/90 - 1s - loss: 0.7750 - val_loss: 0.0981 - lr: 1.0000e-04 - 625ms/epoch - 7ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.05732
90/90 - 1s - loss: 0.7715 - val_loss: 0.0991 - lr: 1.0000e-04 - 632ms/epoch - 7ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.05732
90/90 - 1s - loss: 0.7678 - val_loss: 0.1001 - lr: 1.0000e-04 - 628ms/epoch - 7ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.05732
90/90 - 1s - loss: 0.7642 - val_loss: 0.1012 - lr: 1.0000e-04 - 617ms/epoch - 7ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00011: val_loss did not improve from 0.05732
90/90 - 1s - loss: 0.7606 - val_loss: 0.1024 - lr: 1.0000e-04 - 679ms/epoch - 8ms/step
Epoch 12/500
[... epochs 12-51 elided: val_loss never improved on 0.05732, drifting from 0.1025 to 0.1091, while train loss fell slowly from 0.7584 to 0.7422; lr held at 1.0000e-05 after the epoch-16 ReduceLROnPlateau clip ...]
Epoch 00051: early stopping

TEMA
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	49.63% Accuracy
MSE:	 22.406031022375306 
RMSE:	 4.733500926626645 
MAPE:	 4.170481392757424
Runtime: mins: 14.782075325833329
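The per-MA figures above come from helper functions defined earlier in the notebook. A minimal sketch of how such metrics are typically computed, reading "Prediction vs Close" as directional accuracy, i.e. the fraction of steps where the predicted move (up/down) matches the actual move (the function names here are ours, not the notebook's):

```python
import numpy as np

def directional_accuracy(actual, predicted):
    # fraction of steps where the sign of the predicted change
    # matches the sign of the actual change
    a = np.sign(np.diff(np.asarray(actual, float)))
    p = np.sign(np.diff(np.asarray(predicted, float)))
    return float(np.mean(a == p))

def error_metrics(actual, predicted):
    actual = np.asarray(actual, float)
    predicted = np.asarray(predicted, float)
    mse = float(np.mean((actual - predicted) ** 2))
    rmse = mse ** 0.5
    mape = float(np.mean(np.abs((actual - predicted) / actual)) * 100)
    return mse, rmse, mape
```

On a toy series, `directional_accuracy([1, 2, 3, 2], [1, 2.5, 3.5, 1.5])` is 1.0: every predicted move has the right sign even though the levels are off.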

Architecture Used

In [ ]:
from google.colab import files
import cv2
uploaded = files.upload()
Saving Experiment4.png to Experiment4 (2).png
In [ ]:
img = cv2.imread('Experiment4.png')
plt.figure(figsize=(20,10))
plt.axis("off")
plt.title('LSTM Architecture '+imgfile,fontsize=18)
plt.imshow(img)
Out[ ]:
<matplotlib.image.AxesImage at 0x7f4cbf8f6910>

Model Plots

In [163]:
with open('simulation4_data.json') as json_file:
    simulation4 = json.load(json_file)
fileimg = 'Experiment4'
In [164]:
for i in range(len(list(simulation4.keys()))):
  SIM = list(simulation4.keys())[i]
  plot_train(simulation4,SIM)
  plot_test(simulation4,SIM)
----- Train RMSE for SMA ----- 0.784831471471416
----- Train_MSE_LSTM for SMA ----- 0.615960438611988
----- Train MAE LSTM for SMA ----- 0.27441401292782014
----- Test RMSE for SMA----- 5.053532448778641
----- Test_MSE_LSTM for SMA----- 25.538190210858644
----- Test_MAE_LSTM for SMA----- 3.962093187177072
----- Train RMSE for EMA ----- 6.211147747647157
----- Train_MSE_LSTM for EMA ----- 38.57835634310235
----- Train MAE LSTM for EMA ----- 6.2109213394693805
----- Test RMSE for EMA----- 6.105176365725656
----- Test_MSE_LSTM for EMA----- 37.273178456615135
----- Test_MAE_LSTM for EMA----- 4.793024214680565
----- Train RMSE for WMA ----- 2.002139604905498
----- Train_MSE_LSTM for WMA ----- 4.0085629975311425
----- Train MAE LSTM for WMA ----- 1.0573712433918867
----- Test RMSE for WMA----- 7.454787287347779
----- Test_MSE_LSTM for WMA----- 55.57385349960206
----- Test_MAE_LSTM for WMA----- 6.073522122630416
----- Train RMSE for DEMA ----- 6.0909201621097
----- Train_MSE_LSTM for DEMA ----- 37.099308421194465
----- Train MAE LSTM for DEMA ----- 6.026646866656766
----- Test RMSE for DEMA----- 11.910511331344482
----- Test_MSE_LSTM for DEMA----- 141.86028017408532
----- Test_MAE_LSTM for DEMA----- 10.597824101027992
----- Train RMSE for KAMA ----- 2.1585215730508303
----- Train_MSE_LSTM for KAMA ----- 4.65921538132583
----- Train MAE LSTM for KAMA ----- 2.1121947434869144
----- Test RMSE for KAMA----- 4.795504786575025
----- Test_MSE_LSTM for KAMA----- 22.996866158063977
----- Test_MAE_LSTM for KAMA----- 3.79818199604115
----- Train RMSE for MIDPOINT ----- 4.3843958002580745
----- Train_MSE_LSTM for MIDPOINT ----- 19.222926533320642
----- Train MAE LSTM for MIDPOINT ----- 4.316875709165441
----- Test RMSE for MIDPOINT----- 4.091793872710265
----- Test_MSE_LSTM for MIDPOINT----- 16.742777096749265
----- Test_MAE_LSTM for MIDPOINT----- 3.2612983803063513
----- Train RMSE for T3 ----- 1.472725970486685
----- Train_MSE_LSTM for T3 ----- 2.1689217841459487
----- Train MAE LSTM for T3 ----- 0.6889241237451534
----- Test RMSE for T3----- 7.788120756826606
----- Test_MSE_LSTM for T3----- 60.65482492291343
----- Test_MAE_LSTM for T3----- 6.228183408991355
----- Train RMSE for TEMA ----- 1.001048964655797
----- Train_MSE_LSTM for TEMA ----- 1.002099029638443
----- Train MAE LSTM for TEMA ----- 0.5068751042432124
----- Test RMSE for TEMA----- 4.733500926626645
----- Test_MSE_LSTM for TEMA----- 22.406031022375306
----- Test_MAE_LSTM for TEMA----- 4.170481392757424
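Underlying all of these per-MA results is the same decomposition: the moving average is the low-volatility component handed to ARIMA, the residual close minus MA is the high-volatility component handed to the LSTM, and the final forecast sums the two component forecasts. The identity is exact by construction, as a toy sketch shows (SMA with period 3, NaNs zero-filled as in the notebook):

```python
import pandas as pd

close = pd.Series([10.0, 11.0, 12.0, 11.0, 13.0, 14.0])
low_vol = close.rolling(3).mean().fillna(0)  # smooth MA component -> modeled by ARIMA
high_vol = close - low_vol                   # volatile residual   -> modeled by the LSTM
recon = low_vol + high_vol                   # summing the parts recovers close exactly
```

Any error in the final forecast therefore comes entirely from the two component models, not from the decomposition itself.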

ARIMA with Exogenous Variable Multistep Multivariate LSTM Hybrid Model: Experiment 5

In [ ]:
def get_arima_exog(dataframe, original_data, train_len, test_len):

    # prepare train and test data for the exogenous variables
    X_value = pd.DataFrame(dataframe.iloc[:, :])
    y_value = pd.DataFrame(dataframe.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    # Get data and check shape
    # X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)#X will be of shape 224 X 3 X 21 (each 3 X 21 array will be 3 days' worth of data). yc will have the corresponding closing price value
    # pdb.set_trace()
    X_train, X_test, = split_train_test(X_scale_dataset)
    y_train, y_test, = split_train_test(y_scale_dataset)
    yc_train, yc_test = split_train_test(original_data)
    yc = yc_test.values.tolist()
    y_train_list = y_train.flatten().tolist()
    y_test_list = y_test.flatten().tolist()
    # yc_train, yc_test, = split_train_test(original_data)
    index_train, index_test, = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)

    # Initialize model
    model = auto_arima(y_train_list,exogenous  = X_train,trace=True, error_action='ignore', start_p=1,start_q=1,max_p=3,max_q=3,d=3,
            suppress_warnings=True,stepwise=True,seasonal=True)

      # Determine model parameters
    print(model.summary())
    model.fit(y_train_list,maxiter=200)
    order = model.get_params()['order']
    print('ARIMA order:', order, '\n')

    # Generate predictions
    prediction = []
    for i in range(len(y_test_list)):
        model = pmdarima.ARIMA(order=order)
        model.fit(y_train_list)
        # print('working on', i+1, 'of', len(y_test), '-- ' + str(int(100 * (i + 1) / len(y_test))) + '% complete')

        prediction.append(model.predict()[0])
        y_train_list.append(y_test_list[i])

    predictionte = y_scaler.inverse_transform(np.array(prediction).reshape(-1,1))
    y_test_ = y_scaler.inverse_transform(np.array(y_test_list).reshape(-1,1))

    # Generate error data
    mse = mean_squared_error(yc_test, predictionte)
    rmse = mse ** 0.5
    mae = mean_absolute_error(y_test_ , predictionte )
    return yc,predictionte.flatten().tolist(), mse, rmse, mae
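The prediction loop above is one-step walk-forward forecasting: predict the next point, append the true observation to the training history, refit, and repeat. Stripped of ARIMA, the pattern reduces to the following (`walk_forward` and the naive last-value forecaster are illustrative stand-ins, not the notebook's code):

```python
def walk_forward(train, test, forecast_fn):
    """One-step walk-forward: forecast the next point, then fold the
    true observation into the history before forecasting the next."""
    history = list(train)
    preds = []
    for obs in test:
        preds.append(forecast_fn(history))  # forecast from all data seen so far
        history.append(obs)                 # then reveal the true value
    return preds

# naive last-value forecaster standing in for the per-step ARIMA refit
preds = walk_forward([1.0, 2.0, 3.0], [4.0, 5.0, 6.0], lambda h: h[-1])
```

Refitting a full ARIMA at every step, as the loop above does, is the expensive part of the hybrid's runtime; the structure of the loop itself is this simple.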
In [ ]:
def get_lstm(data,original_data, train_len, test_len,img_file,ma ,lstm_len=3):
    # prepare train and test data
    X_value = pd.DataFrame(data.iloc[:, :])
    y_value = pd.DataFrame(data.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    # Get data and check shape
    X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)  # X has shape (samples, n_steps_in, n_features), e.g. 224 x 3 x 21 (three days of features per sample); yc holds the corresponding closing prices
    # pdb.set_trace()
    X_train, X_test, = split_train_test(X)
    y_train, y_test, = split_train_test(y)
    # yc_train, yc_test, = split_train_test(original_data)
    index_train, index_test, = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    det = 20  # ad-hoc offset subtracted from the test predictions below
    input_dim = X_train.shape[1]     # timesteps per sample, e.g. 3
    feature_size = X_train.shape[2]  # features per timestep, e.g. 24
    output_dim = y_train.shape[1]    # forecast horizon, e.g. 1



    # Option 1
    # Set up & fit LSTM RNN
    model = Sequential()
    model.add(LSTM(256, activation='relu', kernel_initializer='he_normal', input_shape=(input_dim, feature_size)))
    model.add(Dense(units=64,activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(units=output_dim))
    model.compile(optimizer=Adam(learning_rate = 0.001), loss='mse')

    ## Common code
    callbacks = [
    EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    ModelCheckpoint('LSTM5.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    fname1 = img_file+'.png'
    tensorflow.keras.utils.plot_model(
        model, to_file=fname1, show_shapes=True, show_dtype=False,
        show_layer_names=True, expand_nested=False, dpi=96,
        layer_range=None, show_layer_activations=False
    )
    history = model.fit(X_train, y_train, epochs=500, batch_size=int( optimized_period[ma]), verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # plot loss
    fname2 = img_file+'-'+ma
    plt.title(img_file+'-'+ma+' Loss')
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    pyplot.plot(history.history['loss'], label='train')
    pyplot.plot(history.history['val_loss'], label='validation')
    pyplot.legend()
    pyplot.savefig(fname2+'.png',dpi='figure')
    pyplot.show()


    # # option 2
    # model = Sequential()
    # model.add(Bidirectional(LSTM(units= 128), input_shape=(input_dim, feature_size)))
    # model.add(Dense(64))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(lr = 0.001), loss='mean_squared_error', metrics=['accuracy'])
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()

    # Option 3
    # define custom activation
    # 
    # class Double_Tanh(Activation):
    #     def __init__(self, activation, **kwargs):
    #         super(Double_Tanh, self).__init__(activation, **kwargs)
    #         self.__name__ = 'double_tanh'

    # def double_tanh(x):
    #     return (K.tanh(x) * 2)

    # get_custom_objects().update({'double_tanh':Double_Tanh(double_tanh)})
    #     # Model Generation
    # model = Sequential()
    # #check https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
    # model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2, kernel_regularizer=l1_l2(0.00,0.00), bias_regularizer=l1_l2(0.00,0.00)))
    # model.add(Dense(1))
    # model.add(Activation(double_tanh))
    # model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()

    # Option 4
    # Set up & fit LSTM RNN
    # model = Sequential()
    # model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(input_dim, feature_size)))
    # model.add(LSTM(units=int(lstm_len/2)))
    # model.add(Dense(1, activation='sigmoid'))
    # model.compile(loss='mean_squared_error', optimizer='adam')
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM5.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()



    # Generate predictions
    predictiontr = model.predict(X_train, verbose=0)
    predictiontr = y_scaler.inverse_transform(predictiontr).tolist()
    outputtr = []
    for i in range(len(predictiontr)):
        outputtr.extend(predictiontr[i])
    predictiontr = outputtr
    # Generate error data

    ## replace with yc , xtest generated by new multistep method
    mse_tr = mean_squared_error(y_train, predictiontr)
    rmse_tr = mse_tr ** 0.5
    # mape = mean_absolute_percentage_error(X_test, pd.Series(predictiontr))
    mae_tr = mean_absolute_error(y_train, pd.Series(predictiontr))
    # Original_tr = pd.Series(yc_train)
    Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()


    predictionte = model.predict(X_test, verbose=0)
    predictionte = (y_scaler.inverse_transform(predictionte)-det).tolist()
    outputte = []
    for i in range(len(predictionte)):
        outputte.extend(predictionte[i])
    predictionte = outputte
    # Generate error data

    mse_te = mean_squared_error(y_test, predictionte)
    rmse_te = mse_te ** 0.5
    # mape = mean_absolute_percentage_error(X_test, pd.Series(predictionte))
    mae_te = mean_absolute_error(y_test, pd.Series(predictionte))
    # Original_te = pd.Series(yc_test)
    Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()

    return Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,Original_te,predictionte, mse_te,rmse_te,mae_te
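`get_X_y` is defined earlier in the notebook; per the inline comment above, it frames the scaled series as overlapping windows of `n_steps_in` rows of features, each paired with the next `n_steps_out` target values. A hypothetical equivalent (`make_windows` is our name, not the notebook's):

```python
import numpy as np

def make_windows(X, y, n_steps_in=3, n_steps_out=1):
    # each sample: n_steps_in consecutive rows of features,
    # paired with the following n_steps_out target values
    Xs, ys = [], []
    for i in range(len(X) - n_steps_in - n_steps_out + 1):
        Xs.append(X[i:i + n_steps_in])
        ys.append(y[i + n_steps_in:i + n_steps_in + n_steps_out])
    return np.array(Xs), np.array(ys)

X = np.arange(20).reshape(10, 2)   # 10 days x 2 features
y = np.arange(10)                  # 10 targets
Xw, yw = make_windows(X, y)        # Xw: (7, 3, 2), yw: (7, 1)
```

The resulting 3-D array matches the `(input_dim, feature_size)` input shape the LSTM layers above expect.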
In [ ]:
if __name__ == '__main__':
    start_time = timeit.default_timer()
    simulation5 = {}
    imgfile = 'Experiment5'
    for ma in optimized_period:
                print(ma)
                print(functions[ma])
                print ( int( optimized_period[ma]))
              # if ma == 'SMA':
                low_vol = df.apply(lambda c:  functions[ma](c, timeperiod = int( optimized_period[ma])))
                low_vol = low_vol.fillna(0)
                low_vol_data = df['close']
                high_vol = pd.DataFrame()
                df2 = df.copy()
                for i in df2.columns:
                  if i in low_vol.columns:
                    high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
                high_vol_data = df['close']
                ## *****************************************************
                # Generate ARIMA and LSTM predictions
                print('\nWorking on ' + ma + ' predictions')
                try:
                    print('parameters used : ', train_len, test_len)
                    low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse, low_vol_mae = get_arima_exog(low_vol, low_vol_data, train_len, test_len)
                except Exception:
                    print('ARIMA error, skipping to next MA type')
                    continue
                Original_tr, predictiontr, mse_tr, rmse_tr, mae_tr, high_vol_Original, high_vol_prediction, high_vol_mse, high_vol_rmse, high_vol_mae = get_lstm(high_vol, high_vol_data, train_len, test_len, imgfile, ma)
                final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr) # ignoring first 3 steps 
                mse_ftr = mean_squared_error(df['close'].head(train_len).values,final_prediction_tr.values)
                rmse_ftr = mse_ftr ** 0.5
                mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
                mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)

                final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)
                mse = mean_squared_error(df['close'].tail(test_len).values,final_prediction.values)
                rmse = mse ** 0.5
                mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
                mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
                # Generate prediction accuracy
                actual = df['close'].tail(test_len).values
                result_1 = []
                result_2 = []
                for i in range(1, len(final_prediction)):
                    # Compare prediction to previous close price
                    if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
                        result_1.append(1)
                    elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
                        result_1.append(1)
                    else:
                        result_1.append(0)

                    # Compare prediction to previous prediction
                    if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
                        result_2.append(1)
                    elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
                        result_2.append(1)
                    else:
                        result_2.append(0)

                accuracy_1 = np.mean(result_1)
                accuracy_2 = np.mean(result_2)

                simulation5[ma] = {'low_vol': {'original':list(low_vol_Original), 'prediction': list(low_vol_prediction) , 'mse': low_vol_mse,
                                              'rmse': low_vol_rmse, 'mae' : low_vol_mae},
                                  'high_vol': {'original':list(high_vol_Original),'prediction': list(high_vol_prediction), 'mse': high_vol_mse,
                                              'rmse': high_vol_rmse, 'mae' : high_vol_mae},
                                  'final_tr': {'original':df['close'].head(train_len).tolist(),'prediction': final_prediction_tr.values.tolist(), 'mse': mse_ftr,
                                              'rmse': rmse_ftr, 'mae' : mae_ftr},
                                  'final': {'original': df['close'].tail(test_len).tolist(), 'prediction': final_prediction.values.tolist(), 'mse': mse,
                                            'rmse': rmse, 'mae': mae },
                                  'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}

                # save simulation data here as checkpoint
                with open('simulation5_data.json', 'w') as fp:
                    json.dump(simulation5, fp)

                for key in simulation5.keys():
                    print('\n' + key)
                    print('Prediction vs Close:\t\t' + str(round(100*simulation5[key]['accuracy']['prediction vs close'], 2))
                          + '% Accuracy')
                    print('Prediction vs Prediction:\t' + str(round(100*simulation5[key]['accuracy']['prediction vs prediction'], 2))
                          + '% Accuracy')
                    print('MSE:\t', simulation5[key]['final']['mse'],
                          '\nRMSE:\t', simulation5[key]['final']['rmse'],
                          '\nMAE:\t', simulation5[key]['final']['mae'])
              # else:
              #   break
    elapsed = timeit.default_timer() - start_time
    print('Runtime (mins):', elapsed/60)
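The two directional-accuracy measures computed in the loop above can be checked on toy data. Values here are purely illustrative; the strict inequalities in the notebook (so ties count as misses) are reproduced via a strict product-of-differences test:

```python
import numpy as np

# Illustrative closes and predictions (not the notebook's data)
actual = np.array([100.0, 101.0, 99.5, 100.5, 102.0])
pred   = np.array([100.2, 100.6, 100.0, 101.0, 100.8])

result_1, result_2 = [], []
for i in range(1, len(pred)):
    # Prediction vs previous close: did both move the same way?
    result_1.append(int((pred[i] - actual[i-1]) * (actual[i] - actual[i-1]) > 0))
    # Prediction vs previous prediction: did both move the same way?
    result_2.append(int((pred[i] - pred[i-1]) * (actual[i] - actual[i-1]) > 0))

accuracy_1 = np.mean(result_1)  # share of steps where direction matched
accuracy_2 = np.mean(result_2)
```

On this toy series every prediction lands on the right side of the previous close (accuracy_1 = 1.0), but the last prediction-to-prediction move points the wrong way (accuracy_2 = 0.75), which is why the two measures can diverge as they do in the results below.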
SMA
SMA([input_arrays], [timeperiod=30])

Simple Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
17

Working on SMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-15057.252, Time=5.23 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-13616.841, Time=2.94 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-15177.809, Time=11.07 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14725.568, Time=11.58 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-15511.840, Time=16.61 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-15663.563, Time=17.81 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-15093.498, Time=7.72 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-15194.504, Time=11.11 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=-14885.340, Time=20.76 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 104.854 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood                7855.782
Date:                Sun, 12 Dec 2021   AIC                         -15663.563
Time:                        20:06:29   BIC                         -15550.983
Sample:                             0   HQIC                        -15620.328
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -1.202e-05   4.78e-05     -0.251      0.801      -0.000    8.17e-05
x2         -1.202e-05   2.63e-05     -0.458      0.647   -6.35e-05    3.95e-05
x3          -1.21e-05      0.000     -0.118      0.906      -0.000       0.000
x4             1.0000   3.59e-05   2.79e+04      0.000       1.000       1.000
x5         -1.149e-05   3.47e-05     -0.332      0.740   -7.94e-05    5.65e-05
x6         -1.354e-05   2.94e-05     -0.461      0.645   -7.11e-05     4.4e-05
x7         -1.198e-05   3.25e-06     -3.693      0.000   -1.83e-05   -5.62e-06
x8             0.0027   9.17e-06    293.847      0.000       0.003       0.003
x9         -8.458e-07      0.000     -0.006      0.995      -0.000       0.000
x10            0.0005      0.000      1.213      0.225      -0.000       0.001
x11           -0.0027   4.93e-05    -54.454      0.000      -0.003      -0.003
x12            0.0007   3.53e-05     19.122      0.000       0.001       0.001
x13        -1.207e-05   2.16e-05     -0.559      0.576   -5.44e-05    3.03e-05
x14        -3.571e-05   1.38e-05     -2.581      0.010   -6.28e-05   -8.59e-06
x15        -1.308e-05   2.71e-06     -4.820      0.000   -1.84e-05   -7.76e-06
x16         -1.12e-05   4.71e-05     -0.238      0.812      -0.000    8.11e-05
x17        -1.059e-05   1.48e-05     -0.715      0.474   -3.96e-05    1.84e-05
x18         -2.03e-05   5.97e-05     -0.340      0.734      -0.000    9.68e-05
x19        -1.389e-05   3.69e-05     -0.376      0.707   -8.63e-05    5.85e-05
x20         2.105e-05      0.000      0.107      0.915      -0.000       0.000
ar.L1         -1.1996   4.09e-05  -2.93e+04      0.000      -1.200      -1.200
ar.L2         -0.8995   1.54e-05  -5.82e+04      0.000      -0.900      -0.899
ar.L3         -0.3999   1.46e-05  -2.74e+04      0.000      -0.400      -0.400
sigma2      2.425e-10   7.55e-11      3.213      0.001    9.46e-11     3.9e-10
===================================================================================
Ljung-Box (L1) (Q):                  14.46   Jarque-Bera (JB):           2454147.19
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                            -3.95
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.38
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.88e+20. Standard errors may be unstable.
ARIMA order: (3, 3, 0) 
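The stepwise search ranks candidates by the Akaike information criterion, AIC = 2k - 2 ln L, where k is the number of estimated parameters and L the maximised likelihood. As a sanity check against the table above, k = 24 is inferred from the output (20 exogenous coefficients, 3 AR terms, and sigma2); the last digit differs from the reported -15663.563 only because the log-likelihood is printed rounded:

```python
# AIC = 2k - 2*ln(L); values taken from the SARIMAX(3, 3, 0) table above
k = 24                     # 20 exog coefficients + 3 AR terms + sigma2 (inferred)
log_likelihood = 7855.782  # as reported, rounded to three decimals
aic = 2 * k - 2 * log_likelihood
```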

WARNING:tensorflow:Layer lstm_41 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.09855, saving model to LSTM5.h5
48/48 - 3s - loss: 0.3499 - val_loss: 0.0985 - lr: 0.0010 - 3s/epoch - 53ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.09855
48/48 - 1s - loss: 0.1193 - val_loss: 0.9455 - lr: 0.0010 - 605ms/epoch - 13ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.09855
48/48 - 1s - loss: 0.1690 - val_loss: 0.5143 - lr: 0.0010 - 596ms/epoch - 12ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.09855
48/48 - 1s - loss: 0.0709 - val_loss: 0.4796 - lr: 0.0010 - 637ms/epoch - 13ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.09855
48/48 - 1s - loss: 0.0580 - val_loss: 0.1613 - lr: 0.0010 - 629ms/epoch - 13ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.09855 to 0.00964, saving model to LSTM5.h5
48/48 - 1s - loss: 0.0543 - val_loss: 0.0096 - lr: 0.0010 - 656ms/epoch - 14ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.00964
48/48 - 1s - loss: 0.0633 - val_loss: 0.0111 - lr: 0.0010 - 622ms/epoch - 13ms/step
Epoch 8/500

Epoch 00008: val_loss improved from 0.00964 to 0.00781, saving model to LSTM5.h5
48/48 - 1s - loss: 0.0639 - val_loss: 0.0078 - lr: 0.0010 - 613ms/epoch - 13ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0483 - val_loss: 0.1089 - lr: 0.0010 - 584ms/epoch - 12ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0504 - val_loss: 0.0122 - lr: 0.0010 - 584ms/epoch - 12ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0430 - val_loss: 0.0159 - lr: 0.0010 - 585ms/epoch - 12ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0441 - val_loss: 0.0906 - lr: 0.0010 - 661ms/epoch - 14ms/step
Epoch 13/500

Epoch 00013: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00013: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0375 - val_loss: 0.0376 - lr: 0.0010 - 575ms/epoch - 12ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0345 - val_loss: 0.0405 - lr: 1.0000e-04 - 567ms/epoch - 12ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0347 - val_loss: 0.0392 - lr: 1.0000e-04 - 603ms/epoch - 13ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0341 - val_loss: 0.0440 - lr: 1.0000e-04 - 586ms/epoch - 12ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0344 - val_loss: 0.0545 - lr: 1.0000e-04 - 596ms/epoch - 12ms/step
Epoch 18/500

Epoch 00018: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00018: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0344 - val_loss: 0.0616 - lr: 1.0000e-04 - 614ms/epoch - 13ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0326 - val_loss: 0.0625 - lr: 1.0000e-05 - 575ms/epoch - 12ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0308 - val_loss: 0.0623 - lr: 1.0000e-05 - 598ms/epoch - 12ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0324 - val_loss: 0.0638 - lr: 1.0000e-05 - 585ms/epoch - 12ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0339 - val_loss: 0.0651 - lr: 1.0000e-05 - 660ms/epoch - 14ms/step
Epoch 23/500

Epoch 00023: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00023: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0336 - val_loss: 0.0660 - lr: 1.0000e-05 - 653ms/epoch - 14ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0340 - val_loss: 0.0675 - lr: 1.0000e-05 - 610ms/epoch - 13ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0334 - val_loss: 0.0690 - lr: 1.0000e-05 - 586ms/epoch - 12ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0325 - val_loss: 0.0690 - lr: 1.0000e-05 - 616ms/epoch - 13ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0348 - val_loss: 0.0672 - lr: 1.0000e-05 - 622ms/epoch - 13ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0349 - val_loss: 0.0666 - lr: 1.0000e-05 - 604ms/epoch - 13ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0310 - val_loss: 0.0667 - lr: 1.0000e-05 - 632ms/epoch - 13ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0326 - val_loss: 0.0664 - lr: 1.0000e-05 - 625ms/epoch - 13ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0343 - val_loss: 0.0658 - lr: 1.0000e-05 - 573ms/epoch - 12ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0343 - val_loss: 0.0655 - lr: 1.0000e-05 - 606ms/epoch - 13ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0338 - val_loss: 0.0661 - lr: 1.0000e-05 - 569ms/epoch - 12ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0299 - val_loss: 0.0649 - lr: 1.0000e-05 - 578ms/epoch - 12ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0326 - val_loss: 0.0657 - lr: 1.0000e-05 - 597ms/epoch - 12ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0303 - val_loss: 0.0655 - lr: 1.0000e-05 - 583ms/epoch - 12ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0338 - val_loss: 0.0657 - lr: 1.0000e-05 - 578ms/epoch - 12ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0323 - val_loss: 0.0664 - lr: 1.0000e-05 - 669ms/epoch - 14ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0294 - val_loss: 0.0669 - lr: 1.0000e-05 - 608ms/epoch - 13ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0314 - val_loss: 0.0655 - lr: 1.0000e-05 - 586ms/epoch - 12ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0306 - val_loss: 0.0660 - lr: 1.0000e-05 - 603ms/epoch - 13ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0311 - val_loss: 0.0666 - lr: 1.0000e-05 - 585ms/epoch - 12ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0342 - val_loss: 0.0679 - lr: 1.0000e-05 - 591ms/epoch - 12ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0318 - val_loss: 0.0677 - lr: 1.0000e-05 - 605ms/epoch - 13ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0302 - val_loss: 0.0670 - lr: 1.0000e-05 - 558ms/epoch - 12ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0323 - val_loss: 0.0675 - lr: 1.0000e-05 - 591ms/epoch - 12ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0317 - val_loss: 0.0689 - lr: 1.0000e-05 - 562ms/epoch - 12ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0315 - val_loss: 0.0707 - lr: 1.0000e-05 - 598ms/epoch - 12ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0280 - val_loss: 0.0706 - lr: 1.0000e-05 - 636ms/epoch - 13ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0331 - val_loss: 0.0714 - lr: 1.0000e-05 - 567ms/epoch - 12ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0326 - val_loss: 0.0711 - lr: 1.0000e-05 - 604ms/epoch - 13ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0308 - val_loss: 0.0720 - lr: 1.0000e-05 - 621ms/epoch - 13ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0323 - val_loss: 0.0746 - lr: 1.0000e-05 - 628ms/epoch - 13ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0285 - val_loss: 0.0728 - lr: 1.0000e-05 - 618ms/epoch - 13ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0311 - val_loss: 0.0721 - lr: 1.0000e-05 - 569ms/epoch - 12ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0291 - val_loss: 0.0737 - lr: 1.0000e-05 - 580ms/epoch - 12ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0323 - val_loss: 0.0745 - lr: 1.0000e-05 - 583ms/epoch - 12ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.00781
48/48 - 1s - loss: 0.0316 - val_loss: 0.0750 - lr: 1.0000e-05 - 636ms/epoch - 13ms/step
Epoch 00058: early stopping
SMA
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 218.84742268028705 
RMSE:	 14.79349257884314 
MAE:	 12.049823582857737
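The decomposition driving the loop above (a low-volatility component from a moving average, a high-volatility residual from subtraction) can be sketched with a plain pandas rolling mean standing in for TA-Lib's SMA; the series and window here are illustrative:

```python
import pandas as pd

# Toy close series; the notebook uses real OHLC data
close = pd.Series([10.0, 11.0, 12.0, 11.5, 13.0, 12.5])

# Smooth (low-volatility) component: 3-period moving average, NaNs zeroed
# to mirror the fillna(0) step above
low_vol = close.rolling(window=3).mean().fillna(0)

# Residual (high-volatility) component handed to the LSTM
high_vol = close - low_vol

# The two components reconstruct the original series exactly
reconstructed = low_vol + high_vol
```

Zero-filling the warm-up NaNs keeps the identity close = low_vol + high_vol intact over the whole index, which is what lets the ARIMA and LSTM predictions be summed back into a final price forecast.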
EMA
EMA([input_arrays], [timeperiod=30])

Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
51

Working on EMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17007.807, Time=3.40 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14576.593, Time=5.11 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-15585.734, Time=9.61 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14574.593, Time=7.79 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-15458.426, Time=11.39 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15621.247, Time=13.82 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-17231.605, Time=21.73 sec
 ARIMA(0,3,3)(0,0,0)[0]             : AIC=-14570.593, Time=10.25 sec
 ARIMA(1,3,3)(0,0,0)[0]             : AIC=-16761.093, Time=17.84 sec
 ARIMA(0,3,2)(0,0,0)[0] intercept   : AIC=-13173.936, Time=34.33 sec

Best model:  ARIMA(0,3,2)(0,0,0)[0]          
Total fit time: 135.300 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(0, 3, 2)   Log Likelihood                8638.803
Date:                Sun, 12 Dec 2021   AIC                         -17231.605
Time:                        20:12:11   BIC                         -17123.716
Sample:                             0   HQIC                        -17190.171
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -5.101e-09   4.36e-05     -0.000      1.000   -8.54e-05    8.54e-05
x2         -5.085e-09   4.35e-05     -0.000      1.000   -8.53e-05    8.53e-05
x3          -5.12e-09   4.36e-05     -0.000      1.000   -8.56e-05    8.55e-05
x4             1.0000   4.36e-05   2.29e+04      0.000       1.000       1.000
x5         -4.635e-09   4.15e-05     -0.000      1.000   -8.14e-05    8.14e-05
x6         -1.766e-08   7.54e-05     -0.000      1.000      -0.000       0.000
x7         -5.054e-09   4.34e-05     -0.000      1.000    -8.5e-05     8.5e-05
x8         -4.941e-09   4.29e-05     -0.000      1.000   -8.41e-05    8.41e-05
x9         -3.138e-10   8.71e-06   -3.6e-05      1.000   -1.71e-05    1.71e-05
x10        -1.002e-09   1.85e-05  -5.41e-05      1.000   -3.63e-05    3.63e-05
x11        -4.879e-09   4.26e-05     -0.000      1.000   -8.36e-05    8.36e-05
x12        -4.991e-09   4.31e-05     -0.000      1.000   -8.46e-05    8.45e-05
x13        -5.099e-09   4.36e-05     -0.000      1.000   -8.54e-05    8.54e-05
x14        -3.925e-08      0.000     -0.000      1.000      -0.000       0.000
x15        -4.597e-09   4.13e-05     -0.000      1.000    -8.1e-05     8.1e-05
x16        -1.164e-08    6.6e-05     -0.000      1.000      -0.000       0.000
x17        -4.702e-09   4.19e-05     -0.000      1.000   -8.22e-05    8.22e-05
x18        -8.297e-10   1.65e-05  -5.02e-05      1.000   -3.24e-05    3.24e-05
x19        -5.725e-09   4.61e-05     -0.000      1.000   -9.04e-05    9.04e-05
x20        -5.511e-09   4.28e-05     -0.000      1.000    -8.4e-05    8.39e-05
ma.L1         -1.3891   1.96e-08  -7.08e+07      0.000      -1.389      -1.389
ma.L2          0.4027   2.02e-08   1.99e+07      0.000       0.403       0.403
sigma2      7.547e-11   6.92e-11      1.091      0.275   -6.01e-11    2.11e-10
===================================================================================
Ljung-Box (L1) (Q):                  67.97   Jarque-Bera (JB):           6306943.47
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                            12.31
Prob(H) (two-sided):                  0.00   Kurtosis:                       435.93
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 3.3e+24. Standard errors may be unstable.
ARIMA order: (0, 3, 2) 

WARNING:tensorflow:Layer lstm_42 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.09288, saving model to LSTM5.h5
16/16 - 2s - loss: 0.6781 - val_loss: 0.0929 - lr: 0.0010 - 2s/epoch - 135ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.09288
16/16 - 0s - loss: 0.1891 - val_loss: 0.3751 - lr: 0.0010 - 241ms/epoch - 15ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.09288
16/16 - 0s - loss: 0.0797 - val_loss: 0.2141 - lr: 0.0010 - 237ms/epoch - 15ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.09288 to 0.00951, saving model to LSTM5.h5
16/16 - 0s - loss: 0.0587 - val_loss: 0.0095 - lr: 0.0010 - 245ms/epoch - 15ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.00951 to 0.00796, saving model to LSTM5.h5
16/16 - 0s - loss: 0.0464 - val_loss: 0.0080 - lr: 0.0010 - 251ms/epoch - 16ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.00796
16/16 - 0s - loss: 0.0495 - val_loss: 0.0137 - lr: 0.0010 - 215ms/epoch - 13ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.00796
16/16 - 0s - loss: 0.0444 - val_loss: 0.0140 - lr: 0.0010 - 234ms/epoch - 15ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.00796
16/16 - 0s - loss: 0.0402 - val_loss: 0.0262 - lr: 0.0010 - 232ms/epoch - 14ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.00796
16/16 - 0s - loss: 0.0427 - val_loss: 0.0121 - lr: 0.0010 - 209ms/epoch - 13ms/step
Epoch 10/500

Epoch 00010: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00010: val_loss did not improve from 0.00796
16/16 - 0s - loss: 0.0386 - val_loss: 0.0137 - lr: 0.0010 - 224ms/epoch - 14ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.00796
16/16 - 0s - loss: 0.0363 - val_loss: 0.0138 - lr: 1.0000e-04 - 217ms/epoch - 14ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.00796
16/16 - 0s - loss: 0.0382 - val_loss: 0.0130 - lr: 1.0000e-04 - 228ms/epoch - 14ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.00796
16/16 - 0s - loss: 0.0339 - val_loss: 0.0131 - lr: 1.0000e-04 - 226ms/epoch - 14ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.00796
16/16 - 0s - loss: 0.0330 - val_loss: 0.0146 - lr: 1.0000e-04 - 211ms/epoch - 13ms/step
Epoch 15/500

Epoch 00015: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00015: val_loss did not improve from 0.00796
16/16 - 0s - loss: 0.0373 - val_loss: 0.0151 - lr: 1.0000e-04 - 231ms/epoch - 14ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.00796
16/16 - 0s - loss: 0.0345 - val_loss: 0.0150 - lr: 1.0000e-05 - 237ms/epoch - 15ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.00796
16/16 - 0s - loss: 0.0367 - val_loss: 0.0150 - lr: 1.0000e-05 - 226ms/epoch - 14ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.00796
16/16 - 0s - loss: 0.0337 - val_loss: 0.0149 - lr: 1.0000e-05 - 225ms/epoch - 14ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.00796
16/16 - 0s - loss: 0.0358 - val_loss: 0.0148 - lr: 1.0000e-05 - 271ms/epoch - 17ms/step
Epoch 20/500

Epoch 00020: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00020: val_loss did not improve from 0.00796
16/16 - 0s - loss: 0.0348 - val_loss: 0.0147 - lr: 1.0000e-05 - 220ms/epoch - 14ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.00796
16/16 - 0s - loss: 0.0341 - val_loss: 0.0147 - lr: 1.0000e-05 - 222ms/epoch - 14ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.00796
16/16 - 0s - loss: 0.0354 - val_loss: 0.0147 - lr: 1.0000e-05 - 220ms/epoch - 14ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.00796
16/16 - 0s - loss: 0.0371 - val_loss: 0.0148 - lr: 1.0000e-05 - 217ms/epoch - 14ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.00796
16/16 - 0s - loss: 0.0323 - val_loss: 0.0148 - lr: 1.0000e-05 - 228ms/epoch - 14ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.00796
16/16 - 0s - loss: 0.0330 - val_loss: 0.0149 - lr: 1.0000e-05 - 225ms/epoch - 14ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.00796
16/16 - 0s - loss: 0.0381 - val_loss: 0.0150 - lr: 1.0000e-05 - 209ms/epoch - 13ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.00796
16/16 - 0s - loss: 0.0333 - val_loss: 0.0151 - lr: 1.0000e-05 - 219ms/epoch - 14ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.00796
16/16 - 0s - loss: 0.0380 - val_loss: 0.0152 - lr: 1.0000e-05 - 234ms/epoch - 15ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.00796
16/16 - 0s - loss: 0.0344 - val_loss: 0.0151 - lr: 1.0000e-05 - 229ms/epoch - 14ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.00796
16/16 - 0s - loss: 0.0347 - val_loss: 0.0149 - lr: 1.0000e-05 - 223ms/epoch - 14ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.00796
16/16 - 0s - loss: 0.0367 - val_loss: 0.0149 - lr: 1.0000e-05 - 210ms/epoch - 13ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.00796
16/16 - 0s - loss: 0.0336 - val_loss: 0.0150 - lr: 1.0000e-05 - 231ms/epoch - 14ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.00796
16/16 - 0s - loss: 0.0357 - val_loss: 0.0152 - lr: 1.0000e-05 - 254ms/epoch - 16ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.00796
16/16 - 0s - loss: 0.0352 - val_loss: 0.0152 - lr: 1.0000e-05 - 230ms/epoch - 14ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.00796
16/16 - 0s - loss: 0.0353 - val_loss: 0.0154 - lr: 1.0000e-05 - 230ms/epoch - 14ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.00796
16/16 - 0s - loss: 0.0352 - val_loss: 0.0155 - lr: 1.0000e-05 - 222ms/epoch - 14ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.00796
16/16 - 0s - loss: 0.0348 - val_loss: 0.0153 - lr: 1.0000e-05 - 238ms/epoch - 15ms/step
Epochs 38-55/500: val_loss did not improve from 0.00796 (loss 0.0324-0.0365, val_loss 0.0139-0.0151, lr: 1.0000e-05, ~205-253ms/epoch)
Epoch 00055: early stopping
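The log above reflects three Keras callbacks working together: ModelCheckpoint (saving `LSTM5.h5` on each val_loss improvement), ReduceLROnPlateau (the lr drops visible in the log), and EarlyStopping. The exact patience and factor values are not shown in this output; the sketch below mimics the visible behaviour in plain Python with assumed values (`factor=0.1`, `lr_patience=5`, `stop_patience=20`), purely for illustration.

```python
def train_with_plateau_schedule(val_losses, lr=1e-3, factor=0.1,
                                lr_patience=5, stop_patience=20, min_lr=1e-5):
    """Sketch of the callback behaviour seen in the log: cut the learning
    rate when val_loss plateaus, stop early when it has not improved for
    stop_patience epochs. Patience/factor values are assumptions."""
    best = float("inf")
    since_best = 0
    history = []
    for epoch, vl in enumerate(val_losses, start=1):
        if vl < best:
            best, since_best = vl, 0   # ModelCheckpoint would save LSTM5.h5 here
        else:
            since_best += 1
        if since_best and since_best % lr_patience == 0:
            lr = max(lr * factor, min_lr)   # ReduceLROnPlateau step
        history.append((epoch, lr, best))
        if since_best >= stop_patience:     # EarlyStopping triggers
            break
    return history

# Two improving epochs, then a long plateau: stops at epoch 22.
runs = train_with_plateau_schedule([0.5, 0.3] + [0.4] * 30)
```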
SMA
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 218.84742268028705 
RMSE:	 14.79349257884314 
MAPE:	 12.049823582857737

EMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 55.80809642994332 
RMSE:	 7.470481673221836 
MAPE:	 6.155377787606487
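The per-indicator scores above combine error metrics (MSE, RMSE, MAPE) with two accuracy figures. The output does not define "Prediction vs Close" accuracy precisely; a plausible reading is directional accuracy, i.e. how often the predicted move has the same sign as the actual move. A minimal sketch under that assumption:

```python
import math

def mse(y, yhat):
    return sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y)

def rmse(y, yhat):
    return math.sqrt(mse(y, yhat))

def mape(y, yhat):
    # Mean absolute percentage error, reported in percent as above.
    return 100 * sum(abs((a - b) / a) for a, b in zip(y, yhat)) / len(y)

def directional_accuracy(y, yhat):
    """Share of steps where the predicted move from the previous close has
    the same sign as the actual move (assumed meaning of the accuracy
    figures above; the notebook's exact definition is not shown)."""
    hits = sum(
        (yhat[i] - y[i - 1]) * (y[i] - y[i - 1]) > 0
        for i in range(1, len(y))
    )
    return 100 * hits / (len(y) - 1)

close = [100.0, 102.0, 101.0, 105.0]
pred  = [101.0, 103.0, 100.0, 104.0]
```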
WMA
WMA([input_arrays], [timeperiod=30])

Weighted Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
49
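TA-Lib's WMA weights each price in the lookback window by its recency: weights 1, 2, ..., timeperiod, normalized by their sum. A self-contained sketch of that definition (TA-Lib emits NaN for the warm-up values; `None` is used here):

```python
def wma(prices, timeperiod=30):
    """Weighted moving average with linearly increasing weights
    (1, 2, ..., timeperiod), matching TA-Lib's WMA definition.
    The first timeperiod-1 outputs are undefined."""
    n = timeperiod
    denom = n * (n + 1) / 2          # 1 + 2 + ... + n
    out = [None] * (n - 1)
    for i in range(n - 1, len(prices)):
        window = prices[i - n + 1 : i + 1]
        out.append(sum((j + 1) * p for j, p in enumerate(window)) / denom)
    return out

w = wma([1.0, 2.0, 3.0, 4.0, 5.0], timeperiod=3)
```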

Working on WMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-15462.744, Time=15.09 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-13144.103, Time=2.97 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16179.868, Time=7.21 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14670.350, Time=14.74 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-15643.233, Time=21.32 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-15673.437, Time=18.04 sec
 ARIMA(1,3,0)(0,0,0)[0] intercept   : AIC=-15494.535, Time=8.22 sec

Best model:  ARIMA(1,3,0)(0,0,0)[0]          
Total fit time: 87.614 seconds
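The stepwise search above ranks candidate (p, d, q) orders by AIC and keeps the minimum. Stripped of the model fitting, the selection rule reduces to a minimum over the reported values (the intercept variant is omitted here for simplicity; it was not competitive):

```python
# AIC values reported by the stepwise search above; pmdarima's auto_arima
# keeps the candidate order with the lowest AIC.
aic = {
    (1, 3, 1): -15462.744,
    (0, 3, 0): -13144.103,
    (1, 3, 0): -16179.868,
    (0, 3, 1): -14670.350,
    (2, 3, 0): -15643.233,
    (2, 3, 1): -15673.437,
}
best_order = min(aic, key=aic.get)   # lowest AIC wins
```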
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(1, 3, 0)   Log Likelihood                8111.934
Date:                Sun, 12 Dec 2021   AIC                         -16179.868
Time:                        20:19:35   BIC                         -16076.670
Sample:                             0   HQIC                        -16140.236
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -1.474e-05      0.000     -0.048      0.961      -0.001       0.001
x2         -1.471e-05      0.000     -0.041      0.967      -0.001       0.001
x3         -1.475e-05      0.000     -0.072      0.943      -0.000       0.000
x4             1.0000      0.000   3644.383      0.000       0.999       1.001
x5         -1.405e-05      0.000     -0.051      0.960      -0.001       0.001
x6         -2.487e-05   4.39e-05     -0.567      0.571      -0.000    6.11e-05
x7         -1.467e-05      0.000     -0.134      0.893      -0.000       0.000
x8             0.0004      0.000      3.240      0.001       0.000       0.001
x9          3.739e-06      0.001      0.003      0.998      -0.003       0.003
x10           -0.0006      0.001     -0.447      0.655      -0.003       0.002
x11            0.0024   2.31e-05    105.301      0.000       0.002       0.002
x12           -0.0019      0.000     -7.274      0.000      -0.002      -0.001
x13        -1.473e-05      0.000     -0.113      0.910      -0.000       0.000
x14        -4.124e-05      0.000     -0.135      0.893      -0.001       0.001
x15        -1.347e-05      0.000     -0.095      0.924      -0.000       0.000
x16        -2.422e-05      0.000     -0.100      0.920      -0.000       0.000
x17        -1.471e-05      0.000     -0.112      0.911      -0.000       0.000
x18         2.884e-06      0.000      0.006      0.995      -0.001       0.001
x19        -1.493e-05      0.000     -0.105      0.916      -0.000       0.000
x20         3.469e-06      0.000      0.007      0.994      -0.001       0.001
ar.L1         -0.6665   6.84e-05  -9743.045      0.000      -0.667      -0.666
sigma2      1.498e-10   7.34e-11      2.042      0.041    6.03e-12    2.94e-10
===================================================================================
Ljung-Box (L1) (Q):                  89.34   Jarque-Bera (JB):           3270298.31
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             5.18
Prob(H) (two-sided):                  0.00   Kurtosis:                       315.08
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 3.61e+19. Standard errors may be unstable.
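The diagnostics table's Jarque-Bera statistic (3,270,298 here, driven by skew 5.18 and kurtosis 315) strongly rejects normality of the residuals; this is the same fat-tailed, non-mesokurtic behaviour flagged in the introduction. The statistic is a simple function of sample skewness and kurtosis, sketched below:

```python
def jarque_bera(x):
    """Jarque-Bera normality statistic, JB = n/6 * (S^2 + (K - 3)^2 / 4),
    where S is sample skewness and K is (non-excess) kurtosis. A normal
    sample has K near 3, so large JB rejects normality."""
    n = len(x)
    m = sum(x) / n
    m2 = sum((v - m) ** 2 for v in x) / n
    m3 = sum((v - m) ** 3 for v in x) / n
    m4 = sum((v - m) ** 4 for v in x) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6 * (skew ** 2 + (kurt - 3) ** 2 / 4)
```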
ARIMA order: (1, 3, 0) 
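Once the ARIMA order is fixed, the hybrid step (per the introduction) pairs the ARIMA forecast of the linear component with an LSTM modelling what ARIMA leaves behind; the final prediction is their sum. The exact composition is not shown in this output, so the values below are hypothetical placeholders illustrating the combination step only:

```python
# Hypothetical model outputs standing in for the two components.
arima_forecast = [50.0, 51.2, 52.1]   # linear component from the fitted ARIMA
lstm_residual  = [0.4, -0.3, 0.2]     # nonlinear component: LSTM on ARIMA residuals

# Hybrid prediction = linear forecast + predicted residual.
hybrid = [a + r for a, r in zip(arima_forecast, lstm_residual)]
```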

WARNING:tensorflow:Layer lstm_43 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.02747, saving model to LSTM5.h5
17/17 - 2s - loss: 0.6197 - val_loss: 0.0275 - lr: 0.0010 - 2s/epoch - 127ms/step
Epochs 2-51/500: val_loss did not improve from 0.02747; ReduceLROnPlateau dropped the learning rate to 1.0000e-04 at epoch 6 and 1.0000e-05 at epoch 11, after which val_loss drifted down from 0.0866 to 0.0653 without reaching the epoch-1 best (loss settled near 0.0278-0.0363)
Epoch 00051: early stopping

WMA
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	48.13% Accuracy
MSE:	 29.732679793069057 
RMSE:	 5.452768085391956 
MAPE:	 4.481765047502752
DEMA
DEMA([input_arrays], [timeperiod=30])

Double Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
89
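DEMA reduces the lag of a plain EMA by subtracting the doubly smoothed series: DEMA = 2*EMA(price) - EMA(EMA(price)). A self-contained sketch (seeded with the first price; TA-Lib seeds its EMA with an SMA, so warm-up values differ slightly):

```python
def ema(prices, timeperiod):
    """Exponential moving average with smoothing k = 2/(timeperiod+1),
    seeded with the first price for simplicity."""
    k = 2 / (timeperiod + 1)
    out = [prices[0]]
    for p in prices[1:]:
        out.append(p * k + out[-1] * (1 - k))
    return out

def dema(prices, timeperiod=30):
    # Double EMA: 2*EMA - EMA(EMA), which cancels much of the EMA's lag.
    e1 = ema(prices, timeperiod)
    e2 = ema(e1, timeperiod)
    return [2 * a - b for a, b in zip(e1, e2)]

d = dema([5.0] * 10, timeperiod=3)   # a flat series stays flat
```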

Working on DEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17007.773, Time=3.46 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14576.593, Time=5.09 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16293.727, Time=8.60 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14574.593, Time=7.55 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16647.994, Time=11.22 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15621.952, Time=11.66 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-16876.201, Time=12.42 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-17032.019, Time=6.43 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-17006.612, Time=3.58 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-17089.440, Time=8.44 sec
 ARIMA(3,3,2)(0,0,0)[0]             : AIC=inf, Time=17.40 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-17005.977, Time=3.84 sec
 ARIMA(3,3,1)(0,0,0)[0] intercept   : AIC=-17000.665, Time=4.75 sec

Best model:  ARIMA(3,3,1)(0,0,0)[0]          
Total fit time: 104.458 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 1)   Log Likelihood                8569.720
Date:                Sun, 12 Dec 2021   AIC                         -17089.440
Time:                        20:22:27   BIC                         -16972.169
Sample:                             0   HQIC                        -17044.403
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -2.799e-10   1.36e-20  -2.06e+10      0.000    -2.8e-10    -2.8e-10
x2         -2.816e-10   1.37e-20  -2.06e+10      0.000   -2.82e-10   -2.82e-10
x3         -2.804e-10   1.36e-20  -2.06e+10      0.000    -2.8e-10    -2.8e-10
x4             1.0000   1.36e-20   7.33e+19      0.000       1.000       1.000
x5         -2.598e-10   1.31e-20  -1.98e+10      0.000    -2.6e-10    -2.6e-10
x6         -1.388e-09   2.97e-20  -4.67e+10      0.000   -1.39e-09   -1.39e-09
x7         -2.788e-10   1.36e-20  -2.05e+10      0.000   -2.79e-10   -2.79e-10
x8         -2.761e-10   1.35e-20  -2.04e+10      0.000   -2.76e-10   -2.76e-10
x9          -2.22e-12   3.36e-22  -6.61e+09      0.000   -2.22e-12   -2.22e-12
x10        -1.345e-10   9.36e-21  -1.44e+10      0.000   -1.34e-10   -1.34e-10
x11        -2.898e-10   1.38e-20  -2.09e+10      0.000    -2.9e-10    -2.9e-10
x12        -2.602e-10   1.31e-20  -1.98e+10      0.000    -2.6e-10    -2.6e-10
x13        -2.807e-10   1.36e-20  -2.06e+10      0.000   -2.81e-10   -2.81e-10
x14         -1.87e-09   3.52e-20  -5.31e+10      0.000   -1.87e-09   -1.87e-09
x15        -2.767e-10   1.37e-20  -2.03e+10      0.000   -2.77e-10   -2.77e-10
x16        -8.184e-11   7.33e-21  -1.12e+10      0.000   -8.18e-11   -8.18e-11
x17        -2.407e-10   1.27e-20   -1.9e+10      0.000   -2.41e-10   -2.41e-10
x18        -6.412e-10   2.06e-20  -3.11e+10      0.000   -6.41e-10   -6.41e-10
x19        -2.915e-10   1.39e-20   -2.1e+10      0.000   -2.92e-10   -2.92e-10
x20        -4.337e-10   1.69e-20  -2.56e+10      0.000   -4.34e-10   -4.34e-10
ar.L1         -0.4924   1.46e-22  -3.38e+21      0.000      -0.492      -0.492
ar.L2         -0.1923   8.47e-23  -2.27e+21      0.000      -0.192      -0.192
ar.L3         -0.0461   4.02e-23  -1.15e+21      0.000      -0.046      -0.046
ma.L1         -0.7078   3.31e-22  -2.14e+21      0.000      -0.708      -0.708
sigma2       8.99e-11   6.95e-11      1.293      0.196   -4.64e-11    2.26e-10
===================================================================================
Ljung-Box (L1) (Q):                  55.12   Jarque-Bera (JB):           4171061.36
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             5.27
Prob(H) (two-sided):                  0.00   Kurtosis:                       355.48
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.88e+40. Standard errors may be unstable.
ARIMA order: (3, 3, 1) 

WARNING:tensorflow:Layer lstm_44 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.23110, saving model to LSTM5.h5
10/10 - 2s - loss: 0.2732 - val_loss: 0.2311 - lr: 0.0010 - 2s/epoch - 205ms/step
Epochs 2-7/500: aside from a spike at epoch 2 (0.5030), val_loss improved in steps to 0.00826 at epoch 7, saving model to LSTM5.h5 at each improvement
Epochs 8-57/500: val_loss did not improve from 0.00826; ReduceLROnPlateau dropped the learning rate to 1.0000e-04 at epoch 12 and 1.0000e-05 at epoch 17, with val_loss then hovering around 0.0091-0.0149 (loss 0.0337-0.0484)
Epoch 00057: early stopping

DEMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	44.03% Accuracy
MSE:	 217.77293515692304 
RMSE:	 14.757131671057321 
MAPE:	 12.98122268289883
KAMA
KAMA([input_arrays], [timeperiod=30])

Kaufman Adaptive Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
18
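KAMA adapts its smoothing constant to the efficiency ratio ER (net price change over total path length): the average tracks quickly in clean trends and flattens in choppy noise. A sketch of Kaufman's formula with the conventional fast/slow periods of 2 and 30 (seeding and edge handling differ slightly from TA-Lib's implementation):

```python
def kama(prices, timeperiod=30, fast=2, slow=30):
    """Kaufman Adaptive Moving Average. The smoothing constant SC is
    squared-interpolated between a fast and a slow EMA constant by the
    efficiency ratio ER = |net change| / sum of |bar-to-bar changes|."""
    fastest = 2 / (fast + 1)
    slowest = 2 / (slow + 1)
    out = [None] * (timeperiod - 1) + [prices[timeperiod - 1]]  # seed value
    for i in range(timeperiod, len(prices)):
        change = abs(prices[i] - prices[i - timeperiod])
        volatility = sum(abs(prices[j] - prices[j - 1])
                         for j in range(i - timeperiod + 1, i + 1))
        er = change / volatility if volatility else 0.0
        sc = (er * (fastest - slowest) + slowest) ** 2
        out.append(out[-1] + sc * (prices[i] - out[-1]))
    return out

k = kama([7.0] * 40, timeperiod=5)   # a flat series stays flat
```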

Working on KAMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17007.733, Time=3.41 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14576.593, Time=5.14 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16469.294, Time=9.34 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14574.593, Time=8.04 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16346.513, Time=10.22 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-16569.862, Time=12.37 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-16356.870, Time=18.21 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-17033.457, Time=6.86 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-17006.582, Time=3.80 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-17089.434, Time=7.50 sec
 ARIMA(3,3,2)(0,0,0)[0]             : AIC=-15789.397, Time=14.29 sec
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-15386.395, Time=25.23 sec
 ARIMA(3,3,1)(0,0,0)[0] intercept   : AIC=47.433, Time=7.44 sec

Best model:  ARIMA(3,3,1)(0,0,0)[0]          
Total fit time: 131.882 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 1)   Log Likelihood                8569.717
Date:                Sun, 12 Dec 2021   AIC                         -17089.434
Time:                        20:36:20   BIC                         -16972.163
Sample:                             0   HQIC                        -17044.397
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -2.222e-10   9.26e-21   -2.4e+10      0.000   -2.22e-10   -2.22e-10
x2         -2.175e-10   9.18e-21  -2.37e+10      0.000   -2.18e-10   -2.18e-10
x3         -2.088e-10   8.98e-21  -2.33e+10      0.000   -2.09e-10   -2.09e-10
x4             1.0000   9.08e-21    1.1e+20      0.000       1.000       1.000
x5         -1.927e-10   8.64e-21  -2.23e+10      0.000   -1.93e-10   -1.93e-10
x6          -1.33e-09   2.17e-20  -6.14e+10      0.000   -1.33e-09   -1.33e-09
x7         -2.053e-10   8.93e-21   -2.3e+10      0.000   -2.05e-10   -2.05e-10
x8         -1.999e-10   8.84e-21  -2.26e+10      0.000      -2e-10      -2e-10
x9           -3.6e-11   1.09e-21  -3.29e+10      0.000    -3.6e-11    -3.6e-11
x10        -9.188e-11   3.87e-21  -2.37e+10      0.000   -9.19e-11   -9.19e-11
x11        -2.014e-10   8.86e-21  -2.27e+10      0.000   -2.01e-10   -2.01e-10
x12        -1.994e-10   8.77e-21  -2.27e+10      0.000   -1.99e-10   -1.99e-10
x13        -2.115e-10   9.05e-21  -2.34e+10      0.000   -2.12e-10   -2.12e-10
x14        -1.723e-09    2.6e-20  -6.63e+10      0.000   -1.72e-09   -1.72e-09
x15        -2.116e-10    9.1e-21  -2.33e+10      0.000   -2.12e-10   -2.12e-10
x16        -3.169e-10   1.11e-20  -2.85e+10      0.000   -3.17e-10   -3.17e-10
x17        -1.804e-10    8.4e-21  -2.15e+10      0.000    -1.8e-10    -1.8e-10
x18        -1.463e-10   7.54e-21  -1.94e+10      0.000   -1.46e-10   -1.46e-10
x19        -2.598e-10   1.01e-20  -2.58e+10      0.000    -2.6e-10    -2.6e-10
x20        -3.922e-10   1.24e-20  -3.18e+10      0.000   -3.92e-10   -3.92e-10
ar.L1         -0.4926   1.44e-22  -3.42e+21      0.000      -0.493      -0.493
ar.L2         -0.1937    8.6e-23  -2.25e+21      0.000      -0.194      -0.194
ar.L3         -0.0441   3.86e-23  -1.14e+21      0.000      -0.044      -0.044
ma.L1         -0.7085    3.3e-22  -2.15e+21      0.000      -0.709      -0.709
sigma2       8.99e-11   6.95e-11      1.293      0.196   -4.64e-11    2.26e-10
===================================================================================
Ljung-Box (L1) (Q):                  57.24   Jarque-Bera (JB):           3956070.89
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.01   Skew:                             5.16
Prob(H) (two-sided):                  0.00   Kurtosis:                       346.28
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 5.5e+39. Standard errors may be unstable.
ARIMA order: (3, 3, 1) 
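The summary above reports Log Likelihood 8569.717 and AIC -17089.434. These are consistent with AIC = 2k − 2·ln L, where k = 25 is the number of estimated parameters read off the coefficient table (20 exogenous coefficients, 3 AR terms, 1 MA term, and sigma2). A minimal sanity check, assuming that parameter count:

```python
# AIC sanity check for the SARIMAX(3,3,1) fit above.
# k is an assumption read off the coefficient table:
# 20 exogenous coefficients + 3 AR + 1 MA + sigma2 = 25.
log_lik = 8569.717
k = 20 + 3 + 1 + 1
aic = 2 * k - 2 * log_lik
print(round(aic, 3))  # -17089.434, matching the summary
```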

WARNING:tensorflow:Layer lstm_45 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.05525, saving model to LSTM5.h5
45/45 - 2s - loss: 0.3542 - val_loss: 0.0553 - lr: 0.0010 - 2s/epoch - 55ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.05525
45/45 - 1s - loss: 0.1076 - val_loss: 0.5800 - lr: 0.0010 - 594ms/epoch - 13ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.05525
45/45 - 1s - loss: 0.0674 - val_loss: 0.3613 - lr: 0.0010 - 524ms/epoch - 12ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.05525
45/45 - 1s - loss: 0.0581 - val_loss: 0.0805 - lr: 0.0010 - 598ms/epoch - 13ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.05525 to 0.00993, saving model to LSTM5.h5
45/45 - 1s - loss: 0.0501 - val_loss: 0.0099 - lr: 0.0010 - 587ms/epoch - 13ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0469 - val_loss: 0.0373 - lr: 0.0010 - 554ms/epoch - 12ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0388 - val_loss: 0.0336 - lr: 0.0010 - 532ms/epoch - 12ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0418 - val_loss: 0.0205 - lr: 0.0010 - 593ms/epoch - 13ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0325 - val_loss: 0.0326 - lr: 0.0010 - 586ms/epoch - 13ms/step
Epoch 10/500

Epoch 00010: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00010: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0355 - val_loss: 0.0294 - lr: 0.0010 - 547ms/epoch - 12ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0382 - val_loss: 0.0289 - lr: 1.0000e-04 - 561ms/epoch - 12ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0337 - val_loss: 0.0274 - lr: 1.0000e-04 - 555ms/epoch - 12ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0326 - val_loss: 0.0267 - lr: 1.0000e-04 - 563ms/epoch - 13ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0306 - val_loss: 0.0242 - lr: 1.0000e-04 - 594ms/epoch - 13ms/step
Epoch 15/500

Epoch 00015: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00015: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0321 - val_loss: 0.0248 - lr: 1.0000e-04 - 545ms/epoch - 12ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0318 - val_loss: 0.0242 - lr: 1.0000e-05 - 606ms/epoch - 13ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0336 - val_loss: 0.0242 - lr: 1.0000e-05 - 581ms/epoch - 13ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0338 - val_loss: 0.0240 - lr: 1.0000e-05 - 540ms/epoch - 12ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0306 - val_loss: 0.0239 - lr: 1.0000e-05 - 540ms/epoch - 12ms/step
Epoch 20/500

Epoch 00020: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00020: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0335 - val_loss: 0.0231 - lr: 1.0000e-05 - 588ms/epoch - 13ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0320 - val_loss: 0.0228 - lr: 1.0000e-05 - 559ms/epoch - 12ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0324 - val_loss: 0.0223 - lr: 1.0000e-05 - 547ms/epoch - 12ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0334 - val_loss: 0.0223 - lr: 1.0000e-05 - 591ms/epoch - 13ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0319 - val_loss: 0.0221 - lr: 1.0000e-05 - 597ms/epoch - 13ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0356 - val_loss: 0.0216 - lr: 1.0000e-05 - 576ms/epoch - 13ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0256 - val_loss: 0.0213 - lr: 1.0000e-05 - 568ms/epoch - 13ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0306 - val_loss: 0.0210 - lr: 1.0000e-05 - 582ms/epoch - 13ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0323 - val_loss: 0.0210 - lr: 1.0000e-05 - 542ms/epoch - 12ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0308 - val_loss: 0.0209 - lr: 1.0000e-05 - 559ms/epoch - 12ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0314 - val_loss: 0.0207 - lr: 1.0000e-05 - 553ms/epoch - 12ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0327 - val_loss: 0.0206 - lr: 1.0000e-05 - 538ms/epoch - 12ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0325 - val_loss: 0.0208 - lr: 1.0000e-05 - 549ms/epoch - 12ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0317 - val_loss: 0.0205 - lr: 1.0000e-05 - 558ms/epoch - 12ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0305 - val_loss: 0.0207 - lr: 1.0000e-05 - 591ms/epoch - 13ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0324 - val_loss: 0.0210 - lr: 1.0000e-05 - 535ms/epoch - 12ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0310 - val_loss: 0.0201 - lr: 1.0000e-05 - 590ms/epoch - 13ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0302 - val_loss: 0.0197 - lr: 1.0000e-05 - 576ms/epoch - 13ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0297 - val_loss: 0.0195 - lr: 1.0000e-05 - 551ms/epoch - 12ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0337 - val_loss: 0.0190 - lr: 1.0000e-05 - 607ms/epoch - 13ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0333 - val_loss: 0.0193 - lr: 1.0000e-05 - 588ms/epoch - 13ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0308 - val_loss: 0.0189 - lr: 1.0000e-05 - 580ms/epoch - 13ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0296 - val_loss: 0.0194 - lr: 1.0000e-05 - 548ms/epoch - 12ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0307 - val_loss: 0.0195 - lr: 1.0000e-05 - 549ms/epoch - 12ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0328 - val_loss: 0.0194 - lr: 1.0000e-05 - 578ms/epoch - 13ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0291 - val_loss: 0.0192 - lr: 1.0000e-05 - 556ms/epoch - 12ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0293 - val_loss: 0.0193 - lr: 1.0000e-05 - 627ms/epoch - 14ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0296 - val_loss: 0.0190 - lr: 1.0000e-05 - 560ms/epoch - 12ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0278 - val_loss: 0.0189 - lr: 1.0000e-05 - 539ms/epoch - 12ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0297 - val_loss: 0.0197 - lr: 1.0000e-05 - 562ms/epoch - 12ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0335 - val_loss: 0.0193 - lr: 1.0000e-05 - 568ms/epoch - 13ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0303 - val_loss: 0.0191 - lr: 1.0000e-05 - 592ms/epoch - 13ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0315 - val_loss: 0.0190 - lr: 1.0000e-05 - 555ms/epoch - 12ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0292 - val_loss: 0.0191 - lr: 1.0000e-05 - 597ms/epoch - 13ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0286 - val_loss: 0.0188 - lr: 1.0000e-05 - 563ms/epoch - 13ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.00993
45/45 - 1s - loss: 0.0287 - val_loss: 0.0185 - lr: 1.0000e-05 - 546ms/epoch - 12ms/step
Epoch 00055: early stopping
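The training log above shows `ReduceLROnPlateau` cutting the learning rate by 10x whenever `val_loss` stalls, down to a floor of 1e-5, with `EarlyStopping` ending the run once no improvement appears for long enough. A pure-Python sketch of that callback logic; the patience values and factor here are illustrative assumptions, not the notebook's actual callback settings:

```python
# Sketch of the ReduceLROnPlateau / EarlyStopping behaviour seen in the
# logs. lr_patience, factor, and stop_patience are assumed values.
def run_schedule(val_losses, lr=1e-3, factor=0.1, lr_patience=5,
                 stop_patience=50, min_lr=1e-5):
    """Return (final_lr, epochs_run) for a sequence of validation losses."""
    best = float("inf")
    since_best = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best = loss
            since_best = 0
        else:
            since_best += 1
        if since_best and since_best % lr_patience == 0:
            lr = max(lr * factor, min_lr)  # reduce LR on plateau, floor at min_lr
        if since_best >= stop_patience:
            return lr, epoch               # early stopping triggers here
    return lr, len(val_losses)
```

With one early improvement followed by a long plateau, the schedule decays the learning rate to the floor and then stops, mirroring the pattern in the log.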
SMA
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 218.84742268028705 
RMSE:	 14.79349257884314 
MAPE:	 12.049823582857737

EMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 55.80809642994332 
RMSE:	 7.470481673221836 
MAPE:	 6.155377787606487

WMA
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	48.13% Accuracy
MSE:	 29.732679793069057 
RMSE:	 5.452768085391956 
MAPE:	 4.481765047502752

DEMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	44.03% Accuracy
MSE:	 217.77293515692304 
RMSE:	 14.757131671057321 
MAPE:	 12.98122268289883

KAMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	43.28% Accuracy
MSE:	 41.92542421552212 
RMSE:	 6.474984495388551 
MAPE:	 5.290774580380154
MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])

MidPoint over period (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 14
Outputs:
    real
14
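The MIDPOINT indicator described above is simply the midpoint of the highest and lowest price over a trailing window. A pure-Python sketch of the same computation; TA-Lib emits NaN for the warm-up period, represented here as `None`:

```python
# Pure-Python sketch of TA-Lib's MIDPOINT (Overlap Studies):
# (highest + lowest) / 2 over a trailing window of `timeperiod` values.
def midpoint(price, timeperiod=14):
    out = []
    for i in range(len(price)):
        if i + 1 < timeperiod:
            out.append(None)  # not enough history yet (TA-Lib emits NaN)
        else:
            window = price[i + 1 - timeperiod:i + 1]
            out.append((max(window) + min(window)) / 2)
    return out
```

For example, `midpoint([1, 2, 3, 4], timeperiod=2)` yields `[None, 1.5, 2.5, 3.5]`.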

Working on MIDPOINT predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17005.792, Time=3.59 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14576.592, Time=5.19 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16618.742, Time=8.57 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14574.592, Time=7.56 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-17004.301, Time=3.93 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15715.779, Time=23.16 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=inf, Time=3.75 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-17007.442, Time=4.01 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-17188.392, Time=16.90 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-17002.377, Time=4.17 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=-16356.269, Time=15.45 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 96.299 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood                8618.196
Date:                Sun, 12 Dec 2021   AIC                         -17188.392
Time:                        20:49:41   BIC                         -17075.812
Sample:                             0   HQIC                        -17145.157
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -3.582e-10   2.18e-20  -1.64e+10      0.000   -3.58e-10   -3.58e-10
x2         -3.575e-10   2.25e-20  -1.59e+10      0.000   -3.57e-10   -3.57e-10
x3         -3.653e-10   2.09e-20  -1.75e+10      0.000   -3.65e-10   -3.65e-10
x4             1.0000   2.18e-20   4.59e+19      0.000       1.000       1.000
x5         -3.252e-10   2.07e-20  -1.57e+10      0.000   -3.25e-10   -3.25e-10
x6         -7.157e-09   1.78e-19  -4.03e+10      0.000   -7.16e-09   -7.16e-09
x7          -3.29e-10   2.09e-20  -1.58e+10      0.000   -3.29e-10   -3.29e-10
x8          -3.28e-10   2.12e-20  -1.54e+10      0.000   -3.28e-10   -3.28e-10
x9         -1.775e-10   1.29e-21  -1.37e+11      0.000   -1.77e-10   -1.77e-10
x10         -2.94e-10    5.5e-21  -5.34e+10      0.000   -2.94e-10   -2.94e-10
x11        -3.247e-10   2.11e-20  -1.54e+10      0.000   -3.25e-10   -3.25e-10
x12        -3.357e-10   2.11e-20  -1.59e+10      0.000   -3.36e-10   -3.36e-10
x13         -3.46e-10   2.14e-20  -1.62e+10      0.000   -3.46e-10   -3.46e-10
x14        -2.825e-09   6.25e-20  -4.52e+10      0.000   -2.82e-09   -2.82e-09
x15        -3.957e-10   2.33e-20  -1.69e+10      0.000   -3.96e-10   -3.96e-10
x16        -2.548e-10   1.87e-20  -1.36e+10      0.000   -2.55e-10   -2.55e-10
x17        -2.495e-10   1.85e-20  -1.35e+10      0.000   -2.49e-10   -2.49e-10
x18        -1.073e-09   3.84e-20  -2.79e+10      0.000   -1.07e-09   -1.07e-09
x19        -4.343e-10   2.45e-20  -1.78e+10      0.000   -4.34e-10   -4.34e-10
x20        -1.047e-09   3.78e-20  -2.77e+10      0.000   -1.05e-09   -1.05e-09
ar.L1         -1.2157   8.99e-23  -1.35e+22      0.000      -1.216      -1.216
ar.L2         -0.9187   9.81e-23  -9.36e+21      0.000      -0.919      -0.919
ar.L3         -0.4095   9.98e-23   -4.1e+21      0.000      -0.409      -0.409
sigma2      7.969e-11   6.92e-11      1.151      0.250    -5.6e-11    2.15e-10
===================================================================================
Ljung-Box (L1) (Q):                   2.47   Jarque-Bera (JB):             15463.35
Prob(Q):                              0.12   Prob(JB):                         0.00
Heteroskedasticity (H):               0.35   Skew:                             0.62
Prob(H) (two-sided):                  0.00   Kurtosis:                        24.44
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 2.74e+40. Standard errors may be unstable.
ARIMA order: (3, 3, 0) 

WARNING:tensorflow:Layer lstm_46 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.03218, saving model to LSTM5.h5
58/58 - 3s - loss: 0.1718 - val_loss: 0.0322 - lr: 0.0010 - 3s/epoch - 45ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.03218 to 0.02423, saving model to LSTM5.h5
58/58 - 1s - loss: 0.1881 - val_loss: 0.0242 - lr: 0.0010 - 750ms/epoch - 13ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0780 - val_loss: 1.0303 - lr: 0.0010 - 740ms/epoch - 13ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0542 - val_loss: 0.1516 - lr: 0.0010 - 709ms/epoch - 12ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0474 - val_loss: 0.2665 - lr: 0.0010 - 750ms/epoch - 13ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0477 - val_loss: 0.1055 - lr: 0.0010 - 729ms/epoch - 13ms/step
Epoch 7/500

Epoch 00007: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00007: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0388 - val_loss: 0.0368 - lr: 0.0010 - 745ms/epoch - 13ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0426 - val_loss: 0.0245 - lr: 1.0000e-04 - 720ms/epoch - 12ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0369 - val_loss: 0.0282 - lr: 1.0000e-04 - 793ms/epoch - 14ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0356 - val_loss: 0.0256 - lr: 1.0000e-04 - 708ms/epoch - 12ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0350 - val_loss: 0.0338 - lr: 1.0000e-04 - 747ms/epoch - 13ms/step
Epoch 12/500

Epoch 00012: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00012: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0359 - val_loss: 0.0285 - lr: 1.0000e-04 - 700ms/epoch - 12ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0347 - val_loss: 0.0280 - lr: 1.0000e-05 - 690ms/epoch - 12ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0335 - val_loss: 0.0281 - lr: 1.0000e-05 - 691ms/epoch - 12ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0386 - val_loss: 0.0276 - lr: 1.0000e-05 - 684ms/epoch - 12ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0343 - val_loss: 0.0279 - lr: 1.0000e-05 - 699ms/epoch - 12ms/step
Epoch 17/500

Epoch 00017: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00017: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0360 - val_loss: 0.0268 - lr: 1.0000e-05 - 695ms/epoch - 12ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0360 - val_loss: 0.0277 - lr: 1.0000e-05 - 693ms/epoch - 12ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0361 - val_loss: 0.0284 - lr: 1.0000e-05 - 687ms/epoch - 12ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0335 - val_loss: 0.0292 - lr: 1.0000e-05 - 689ms/epoch - 12ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0366 - val_loss: 0.0286 - lr: 1.0000e-05 - 709ms/epoch - 12ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0328 - val_loss: 0.0282 - lr: 1.0000e-05 - 677ms/epoch - 12ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0321 - val_loss: 0.0286 - lr: 1.0000e-05 - 678ms/epoch - 12ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0338 - val_loss: 0.0290 - lr: 1.0000e-05 - 712ms/epoch - 12ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0341 - val_loss: 0.0297 - lr: 1.0000e-05 - 692ms/epoch - 12ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0373 - val_loss: 0.0295 - lr: 1.0000e-05 - 691ms/epoch - 12ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0317 - val_loss: 0.0294 - lr: 1.0000e-05 - 704ms/epoch - 12ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0349 - val_loss: 0.0282 - lr: 1.0000e-05 - 689ms/epoch - 12ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0320 - val_loss: 0.0283 - lr: 1.0000e-05 - 682ms/epoch - 12ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0328 - val_loss: 0.0279 - lr: 1.0000e-05 - 697ms/epoch - 12ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0331 - val_loss: 0.0289 - lr: 1.0000e-05 - 681ms/epoch - 12ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0329 - val_loss: 0.0294 - lr: 1.0000e-05 - 693ms/epoch - 12ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0353 - val_loss: 0.0296 - lr: 1.0000e-05 - 718ms/epoch - 12ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0349 - val_loss: 0.0295 - lr: 1.0000e-05 - 714ms/epoch - 12ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0329 - val_loss: 0.0291 - lr: 1.0000e-05 - 661ms/epoch - 11ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0315 - val_loss: 0.0290 - lr: 1.0000e-05 - 702ms/epoch - 12ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0322 - val_loss: 0.0295 - lr: 1.0000e-05 - 695ms/epoch - 12ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0346 - val_loss: 0.0291 - lr: 1.0000e-05 - 671ms/epoch - 12ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0329 - val_loss: 0.0295 - lr: 1.0000e-05 - 707ms/epoch - 12ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0361 - val_loss: 0.0290 - lr: 1.0000e-05 - 686ms/epoch - 12ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0330 - val_loss: 0.0276 - lr: 1.0000e-05 - 682ms/epoch - 12ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0344 - val_loss: 0.0290 - lr: 1.0000e-05 - 693ms/epoch - 12ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0303 - val_loss: 0.0285 - lr: 1.0000e-05 - 707ms/epoch - 12ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0347 - val_loss: 0.0279 - lr: 1.0000e-05 - 689ms/epoch - 12ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0324 - val_loss: 0.0277 - lr: 1.0000e-05 - 678ms/epoch - 12ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0304 - val_loss: 0.0264 - lr: 1.0000e-05 - 705ms/epoch - 12ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0317 - val_loss: 0.0263 - lr: 1.0000e-05 - 729ms/epoch - 13ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0323 - val_loss: 0.0260 - lr: 1.0000e-05 - 674ms/epoch - 12ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0323 - val_loss: 0.0260 - lr: 1.0000e-05 - 693ms/epoch - 12ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0317 - val_loss: 0.0261 - lr: 1.0000e-05 - 687ms/epoch - 12ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0325 - val_loss: 0.0262 - lr: 1.0000e-05 - 683ms/epoch - 12ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.02423
58/58 - 1s - loss: 0.0368 - val_loss: 0.0254 - lr: 1.0000e-05 - 669ms/epoch - 12ms/step
Epoch 00052: early stopping
SMA
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 218.84742268028705 
RMSE:	 14.79349257884314 
MAPE:	 12.049823582857737

EMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 55.80809642994332 
RMSE:	 7.470481673221836 
MAPE:	 6.155377787606487

WMA
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	48.13% Accuracy
MSE:	 29.732679793069057 
RMSE:	 5.452768085391956 
MAPE:	 4.481765047502752

DEMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	44.03% Accuracy
MSE:	 217.77293515692304 
RMSE:	 14.757131671057321 
MAPE:	 12.98122268289883

KAMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	43.28% Accuracy
MSE:	 41.92542421552212 
RMSE:	 6.474984495388551 
MAPE:	 5.290774580380154

MIDPOINT
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	45.15% Accuracy
MSE:	 49.176657562881935 
RMSE:	 7.012607044664768 
MAPE:	 5.71406626958764
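The MSE, RMSE, and MAPE figures printed for each indicator follow the standard definitions. A self-contained sketch of those three metrics, assuming MAPE is reported as a percentage (as the magnitudes above suggest):

```python
import math

# Standard regression error metrics, as printed per indicator above.
def mse(actual, pred):
    return sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual)

def rmse(actual, pred):
    return math.sqrt(mse(actual, pred))

def mape(actual, pred):
    # Mean absolute percentage error, expressed as a percentage.
    return 100 * sum(abs((a - p) / a) for a, p in zip(actual, pred)) / len(actual)
```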
T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])

Triple Exponential Moving Average (T3) (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 5
    vfactor: 0.7
Outputs:
    real
19
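The T3 indicator described above is Tillson's "generalized DEMA" applied three times: GD(x) = EMA(x)·(1+v) − EMA(EMA(x))·v, and T3(x) = GD(GD(GD(x))), with volume factor v (`vfactor`). A pure-Python sketch; the EMA here is seeded with the first value, so warm-up values will not match TA-Lib's output exactly:

```python
# Pure-Python sketch of T3 via the generalized-DEMA construction.
# EMA seeding differs from TA-Lib's warm-up handling (an assumption),
# so early values diverge from TA-Lib's T3.
def ema(x, timeperiod):
    alpha = 2 / (timeperiod + 1)
    out = [x[0]]  # seed with the first value
    for val in x[1:]:
        out.append(alpha * val + (1 - alpha) * out[-1])
    return out

def gd(x, timeperiod, vfactor):
    # GD(x) = EMA(x)*(1+v) - EMA(EMA(x))*v
    e1 = ema(x, timeperiod)
    e2 = ema(e1, timeperiod)
    return [(1 + vfactor) * a - vfactor * b for a, b in zip(e1, e2)]

def t3(x, timeperiod=5, vfactor=0.7):
    # T3 = GD applied three times
    return gd(gd(gd(x, timeperiod, vfactor), timeperiod, vfactor),
              timeperiod, vfactor)
```

A quick property check: a constant series is a fixed point of EMA and GD, so T3 of a constant series is that same constant.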

Working on T3 predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17007.439, Time=3.43 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-13714.163, Time=6.03 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-14620.288, Time=5.33 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-16512.116, Time=12.30 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-17085.548, Time=10.79 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-17009.877, Time=3.90 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-17089.740, Time=7.93 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-17006.211, Time=3.89 sec
 ARIMA(3,3,2)(0,0,0)[0]             : AIC=-17349.997, Time=19.22 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-17006.024, Time=4.12 sec
 ARIMA(3,3,3)(0,0,0)[0]             : AIC=-14720.521, Time=13.46 sec
 ARIMA(2,3,3)(0,0,0)[0]             : AIC=-16599.516, Time=14.80 sec
 ARIMA(3,3,2)(0,0,0)[0] intercept   : AIC=-13110.324, Time=18.60 sec

Best model:  ARIMA(3,3,2)(0,0,0)[0]          
Total fit time: 123.835 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 2)   Log Likelihood                8700.998
Date:                Sun, 12 Dec 2021   AIC                         -17349.997
Time:                        20:55:01   BIC                         -17228.035
Sample:                             0   HQIC                        -17303.158
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1          4.251e-09   2.48e-05      0.000      1.000   -4.85e-05    4.85e-05
x2          4.257e-09   2.48e-05      0.000      1.000   -4.86e-05    4.87e-05
x3          4.244e-09   2.34e-05      0.000      1.000   -4.58e-05    4.58e-05
x4             1.0000   2.37e-05   4.23e+04      0.000       1.000       1.000
x5          4.344e-09   2.35e-05      0.000      1.000    -4.6e-05     4.6e-05
x6          3.064e-09   6.26e-05   4.89e-05      1.000      -0.000       0.000
x7           4.26e-09   3.09e-05      0.000      1.000   -6.05e-05    6.05e-05
x8            -0.0001   4.28e-05     -2.782      0.005      -0.000   -3.51e-05
x9         -3.943e-09   4.01e-06     -0.001      0.999   -7.86e-06    7.85e-06
x10        -1.431e-05    9.6e-05     -0.149      0.881      -0.000       0.000
x11            0.0001   3.13e-05      3.693      0.000    5.42e-05       0.000
x12         1.616e-06   5.46e-05      0.030      0.976      -0.000       0.000
x13         4.247e-09   2.49e-05      0.000      1.000   -4.87e-05    4.87e-05
x14        -1.778e-08   5.56e-05     -0.000      1.000      -0.000       0.000
x15         4.488e-09      3e-05      0.000      1.000   -5.88e-05    5.88e-05
x16        -6.718e-09   4.66e-05     -0.000      1.000   -9.13e-05    9.13e-05
x17         3.935e-09    8.3e-06      0.000      1.000   -1.63e-05    1.63e-05
x18        -2.742e-08      0.000     -0.000      1.000      -0.000       0.000
x19         4.464e-09   4.48e-05   9.97e-05      1.000   -8.78e-05    8.78e-05
x20          4.06e-09      0.000   8.55e-06      1.000      -0.001       0.001
ar.L1         -1.2437   2.38e-08  -5.23e+07      0.000      -1.244      -1.244
ar.L2         -0.5344   9.34e-09  -5.72e+07      0.000      -0.534      -0.534
ar.L3         -0.1491   9.43e-10  -1.58e+08      0.000      -0.149      -0.149
ma.L1         -0.2521   9.13e-09  -2.76e+07      0.000      -0.252      -0.252
ma.L2         -0.7294   1.95e-08  -3.75e+07      0.000      -0.729      -0.729
sigma2      6.455e-11   6.89e-11      0.937      0.349   -7.05e-11       2e-10
===================================================================================
Ljung-Box (L1) (Q):                  30.63   Jarque-Bera (JB):           6336314.18
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                            13.86
Prob(H) (two-sided):                  0.00   Kurtosis:                       436.75
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.35e+27. Standard errors may be unstable.
ARIMA order: (3, 3, 2) 

WARNING:tensorflow:Layer lstm_47 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.03428, saving model to LSTM5.h5
43/43 - 3s - loss: 0.3077 - val_loss: 0.0343 - lr: 0.0010 - 3s/epoch - 64ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.03428
43/43 - 1s - loss: 0.0935 - val_loss: 0.4037 - lr: 0.0010 - 581ms/epoch - 14ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.03428
43/43 - 1s - loss: 0.0542 - val_loss: 0.2713 - lr: 0.0010 - 572ms/epoch - 13ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.03428
43/43 - 1s - loss: 0.0455 - val_loss: 0.0906 - lr: 0.0010 - 555ms/epoch - 13ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.03428
43/43 - 1s - loss: 0.0431 - val_loss: 0.0394 - lr: 0.0010 - 575ms/epoch - 13ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.03428 to 0.03000, saving model to LSTM5.h5
43/43 - 1s - loss: 0.0375 - val_loss: 0.0300 - lr: 0.0010 - 575ms/epoch - 13ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.03000
43/43 - 1s - loss: 0.0360 - val_loss: 0.0402 - lr: 0.0010 - 551ms/epoch - 13ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.03000
43/43 - 1s - loss: 0.0490 - val_loss: 0.0333 - lr: 0.0010 - 517ms/epoch - 12ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.03000
43/43 - 1s - loss: 0.0375 - val_loss: 0.0332 - lr: 0.0010 - 533ms/epoch - 12ms/step
Epoch 10/500

Epoch 00010: val_loss improved from 0.03000 to 0.00879, saving model to LSTM5.h5
43/43 - 1s - loss: 0.0308 - val_loss: 0.0088 - lr: 0.0010 - 507ms/epoch - 12ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0296 - val_loss: 0.0349 - lr: 0.0010 - 562ms/epoch - 13ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0330 - val_loss: 0.0102 - lr: 0.0010 - 520ms/epoch - 12ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0362 - val_loss: 0.0757 - lr: 0.0010 - 545ms/epoch - 13ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0340 - val_loss: 0.0229 - lr: 0.0010 - 537ms/epoch - 12ms/step
Epoch 15/500

Epoch 00015: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00015: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0312 - val_loss: 0.0218 - lr: 0.0010 - 521ms/epoch - 12ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0446 - val_loss: 0.0151 - lr: 1.0000e-04 - 568ms/epoch - 13ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0304 - val_loss: 0.0128 - lr: 1.0000e-04 - 528ms/epoch - 12ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0269 - val_loss: 0.0126 - lr: 1.0000e-04 - 526ms/epoch - 12ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0258 - val_loss: 0.0138 - lr: 1.0000e-04 - 561ms/epoch - 13ms/step
Epoch 20/500

Epoch 00020: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00020: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0251 - val_loss: 0.0156 - lr: 1.0000e-04 - 544ms/epoch - 13ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0237 - val_loss: 0.0157 - lr: 1.0000e-05 - 528ms/epoch - 12ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0221 - val_loss: 0.0159 - lr: 1.0000e-05 - 563ms/epoch - 13ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0247 - val_loss: 0.0161 - lr: 1.0000e-05 - 509ms/epoch - 12ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0235 - val_loss: 0.0164 - lr: 1.0000e-05 - 583ms/epoch - 14ms/step
Epoch 25/500

Epoch 00025: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00025: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0232 - val_loss: 0.0167 - lr: 1.0000e-05 - 554ms/epoch - 13ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0232 - val_loss: 0.0166 - lr: 1.0000e-05 - 547ms/epoch - 13ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0259 - val_loss: 0.0165 - lr: 1.0000e-05 - 570ms/epoch - 13ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0227 - val_loss: 0.0167 - lr: 1.0000e-05 - 522ms/epoch - 12ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0229 - val_loss: 0.0169 - lr: 1.0000e-05 - 585ms/epoch - 14ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0244 - val_loss: 0.0171 - lr: 1.0000e-05 - 516ms/epoch - 12ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0254 - val_loss: 0.0170 - lr: 1.0000e-05 - 529ms/epoch - 12ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0251 - val_loss: 0.0171 - lr: 1.0000e-05 - 541ms/epoch - 13ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0233 - val_loss: 0.0175 - lr: 1.0000e-05 - 551ms/epoch - 13ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0240 - val_loss: 0.0175 - lr: 1.0000e-05 - 563ms/epoch - 13ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0241 - val_loss: 0.0179 - lr: 1.0000e-05 - 560ms/epoch - 13ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0212 - val_loss: 0.0181 - lr: 1.0000e-05 - 536ms/epoch - 12ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0236 - val_loss: 0.0179 - lr: 1.0000e-05 - 555ms/epoch - 13ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0229 - val_loss: 0.0179 - lr: 1.0000e-05 - 524ms/epoch - 12ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0239 - val_loss: 0.0185 - lr: 1.0000e-05 - 555ms/epoch - 13ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0232 - val_loss: 0.0185 - lr: 1.0000e-05 - 542ms/epoch - 13ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0251 - val_loss: 0.0188 - lr: 1.0000e-05 - 550ms/epoch - 13ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0221 - val_loss: 0.0186 - lr: 1.0000e-05 - 572ms/epoch - 13ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0240 - val_loss: 0.0185 - lr: 1.0000e-05 - 556ms/epoch - 13ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0206 - val_loss: 0.0190 - lr: 1.0000e-05 - 566ms/epoch - 13ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0223 - val_loss: 0.0194 - lr: 1.0000e-05 - 556ms/epoch - 13ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0218 - val_loss: 0.0198 - lr: 1.0000e-05 - 545ms/epoch - 13ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0209 - val_loss: 0.0200 - lr: 1.0000e-05 - 519ms/epoch - 12ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0227 - val_loss: 0.0203 - lr: 1.0000e-05 - 528ms/epoch - 12ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0245 - val_loss: 0.0205 - lr: 1.0000e-05 - 549ms/epoch - 13ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0214 - val_loss: 0.0205 - lr: 1.0000e-05 - 529ms/epoch - 12ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0231 - val_loss: 0.0201 - lr: 1.0000e-05 - 584ms/epoch - 14ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0250 - val_loss: 0.0200 - lr: 1.0000e-05 - 523ms/epoch - 12ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0239 - val_loss: 0.0207 - lr: 1.0000e-05 - 566ms/epoch - 13ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0239 - val_loss: 0.0210 - lr: 1.0000e-05 - 548ms/epoch - 13ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0217 - val_loss: 0.0206 - lr: 1.0000e-05 - 518ms/epoch - 12ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0237 - val_loss: 0.0213 - lr: 1.0000e-05 - 526ms/epoch - 12ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0205 - val_loss: 0.0224 - lr: 1.0000e-05 - 551ms/epoch - 13ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0213 - val_loss: 0.0226 - lr: 1.0000e-05 - 522ms/epoch - 12ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0214 - val_loss: 0.0227 - lr: 1.0000e-05 - 581ms/epoch - 14ms/step
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.00879
43/43 - 1s - loss: 0.0235 - val_loss: 0.0221 - lr: 1.0000e-05 - 519ms/epoch - 12ms/step
Epoch 00060: early stopping
SMA
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 218.84742268028705 
RMSE:	 14.79349257884314 
MAPE:	 12.049823582857737

EMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 55.80809642994332 
RMSE:	 7.470481673221836 
MAPE:	 6.155377787606487

WMA
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	48.13% Accuracy
MSE:	 29.732679793069057 
RMSE:	 5.452768085391956 
MAPE:	 4.481765047502752

DEMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	44.03% Accuracy
MSE:	 217.77293515692304 
RMSE:	 14.757131671057321 
MAPE:	 12.98122268289883

KAMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	43.28% Accuracy
MSE:	 41.92542421552212 
RMSE:	 6.474984495388551 
MAPE:	 5.290774580380154

MIDPOINT
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	45.15% Accuracy
MSE:	 49.176657562881935 
RMSE:	 7.012607044664768 
MAPE:	 5.71406626958764

T3
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	43.28% Accuracy
MSE:	 267.63588995886505 
RMSE:	 16.359580983596892 
MAPE:	 13.903459389418241
TEMA
TEMA([input_arrays], [timeperiod=30])

Triple Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
9

Working on TEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16996.849, Time=3.64 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14177.794, Time=2.13 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16779.945, Time=8.00 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14417.099, Time=12.15 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16996.773, Time=4.01 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-14470.746, Time=10.43 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-16999.230, Time=3.53 sec
 ARIMA(0,3,3)(0,0,0)[0]             : AIC=-14413.099, Time=15.50 sec
 ARIMA(1,3,3)(0,0,0)[0]             : AIC=-16992.097, Time=4.83 sec
 ARIMA(0,3,2)(0,0,0)[0] intercept   : AIC=-16997.225, Time=3.38 sec

Best model:  ARIMA(0,3,2)(0,0,0)[0]          
Total fit time: 67.622 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(0, 3, 2)   Log Likelihood                8522.615
Date:                Sun, 12 Dec 2021   AIC                         -16999.230
Time:                        21:04:48   BIC                         -16891.341
Sample:                             0   HQIC                        -16957.796
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1           2.33e-15      0.001   2.87e-12      1.000      -0.002       0.002
x2         -4.502e-16      0.000  -1.15e-12      1.000      -0.001       0.001
x3          3.943e-17      0.001   5.53e-14      1.000      -0.001       0.001
x4             1.0000      0.001   1486.752      0.000       0.999       1.001
x5         -1.326e-14      0.001  -2.01e-11      1.000      -0.001       0.001
x6         -7.238e-16   6.02e-05   -1.2e-11      1.000      -0.000       0.000
x7          4.644e-16      0.000   1.63e-12      1.000      -0.001       0.001
x8            -0.0003   6.84e-05     -4.783      0.000      -0.000      -0.000
x9          4.956e-16      0.001   8.09e-13      1.000      -0.001       0.001
x10        -5.078e-05      0.000     -0.169      0.866      -0.001       0.001
x11            0.0005   8.52e-05      5.342      0.000       0.000       0.001
x12        -6.163e-05   6.76e-05     -0.912      0.362      -0.000    7.08e-05
x13        -6.225e-17      0.000  -1.81e-13      1.000      -0.001       0.001
x14         2.723e-16      0.000   1.71e-12      1.000      -0.000       0.000
x15         2.531e-13    9.1e-05   2.78e-09      1.000      -0.000       0.000
x16        -3.448e-13      0.000  -1.94e-09      1.000      -0.000       0.000
x17         1.188e-12      0.000   1.15e-08      1.000      -0.000       0.000
x18        -5.746e-14      0.000  -5.12e-10      1.000      -0.000       0.000
x19        -2.336e-13      0.000  -2.29e-09      1.000      -0.000       0.000
x20        -9.777e-15      0.000  -9.27e-11      1.000      -0.000       0.000
ma.L1         -1.3477   4.17e-08  -3.23e+07      0.000      -1.348      -1.348
ma.L2          0.3862   8.11e-08   4.76e+06      0.000       0.386       0.386
sigma2          1e-10   7.38e-11      1.355      0.175   -4.46e-11    2.45e-10
===================================================================================
Ljung-Box (L1) (Q):                  50.19   Jarque-Bera (JB):           4788158.62
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.04   Skew:                           -10.02
Prob(H) (two-sided):                  0.00   Kurtosis:                       380.29
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 6.4e+24. Standard errors may be unstable.
ARIMA order: (0, 3, 2) 

WARNING:tensorflow:Layer lstm_48 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.03169, saving model to LSTM5.h5
90/90 - 3s - loss: 0.1177 - val_loss: 0.0317 - lr: 0.0010 - 3s/epoch - 36ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0777 - val_loss: 0.5584 - lr: 0.0010 - 1s/epoch - 13ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0567 - val_loss: 0.0987 - lr: 0.0010 - 1s/epoch - 12ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0722 - val_loss: 0.0377 - lr: 0.0010 - 1s/epoch - 13ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0542 - val_loss: 0.4369 - lr: 0.0010 - 1s/epoch - 13ms/step
Epoch 6/500

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00006: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0382 - val_loss: 0.1058 - lr: 0.0010 - 1s/epoch - 12ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0335 - val_loss: 0.0803 - lr: 1.0000e-04 - 1s/epoch - 13ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0323 - val_loss: 0.0612 - lr: 1.0000e-04 - 1s/epoch - 12ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0314 - val_loss: 0.0511 - lr: 1.0000e-04 - 1s/epoch - 13ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0299 - val_loss: 0.0471 - lr: 1.0000e-04 - 1s/epoch - 12ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00011: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0261 - val_loss: 0.0420 - lr: 1.0000e-04 - 1s/epoch - 13ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0270 - val_loss: 0.0410 - lr: 1.0000e-05 - 1s/epoch - 13ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0263 - val_loss: 0.0405 - lr: 1.0000e-05 - 1s/epoch - 12ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0268 - val_loss: 0.0393 - lr: 1.0000e-05 - 1s/epoch - 12ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0235 - val_loss: 0.0391 - lr: 1.0000e-05 - 1s/epoch - 13ms/step
Epoch 16/500

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00016: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0258 - val_loss: 0.0397 - lr: 1.0000e-05 - 1s/epoch - 13ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0269 - val_loss: 0.0380 - lr: 1.0000e-05 - 1s/epoch - 12ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0256 - val_loss: 0.0373 - lr: 1.0000e-05 - 1s/epoch - 12ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0248 - val_loss: 0.0376 - lr: 1.0000e-05 - 1s/epoch - 13ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0243 - val_loss: 0.0381 - lr: 1.0000e-05 - 1s/epoch - 12ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0238 - val_loss: 0.0363 - lr: 1.0000e-05 - 1s/epoch - 12ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0252 - val_loss: 0.0378 - lr: 1.0000e-05 - 1s/epoch - 12ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0245 - val_loss: 0.0392 - lr: 1.0000e-05 - 1s/epoch - 12ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0259 - val_loss: 0.0397 - lr: 1.0000e-05 - 1s/epoch - 13ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0247 - val_loss: 0.0391 - lr: 1.0000e-05 - 1s/epoch - 13ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0253 - val_loss: 0.0386 - lr: 1.0000e-05 - 1s/epoch - 12ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0285 - val_loss: 0.0378 - lr: 1.0000e-05 - 1s/epoch - 12ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0278 - val_loss: 0.0373 - lr: 1.0000e-05 - 1s/epoch - 13ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0250 - val_loss: 0.0356 - lr: 1.0000e-05 - 1s/epoch - 13ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0240 - val_loss: 0.0357 - lr: 1.0000e-05 - 1s/epoch - 12ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0230 - val_loss: 0.0361 - lr: 1.0000e-05 - 1s/epoch - 13ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0262 - val_loss: 0.0374 - lr: 1.0000e-05 - 1s/epoch - 12ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0274 - val_loss: 0.0378 - lr: 1.0000e-05 - 1s/epoch - 12ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0250 - val_loss: 0.0371 - lr: 1.0000e-05 - 1s/epoch - 12ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0259 - val_loss: 0.0384 - lr: 1.0000e-05 - 1s/epoch - 12ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0229 - val_loss: 0.0370 - lr: 1.0000e-05 - 1s/epoch - 13ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0247 - val_loss: 0.0370 - lr: 1.0000e-05 - 1s/epoch - 12ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0242 - val_loss: 0.0373 - lr: 1.0000e-05 - 1s/epoch - 13ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0240 - val_loss: 0.0377 - lr: 1.0000e-05 - 1s/epoch - 12ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0236 - val_loss: 0.0357 - lr: 1.0000e-05 - 1s/epoch - 12ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0230 - val_loss: 0.0361 - lr: 1.0000e-05 - 1s/epoch - 12ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0228 - val_loss: 0.0358 - lr: 1.0000e-05 - 1s/epoch - 12ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0251 - val_loss: 0.0394 - lr: 1.0000e-05 - 1s/epoch - 12ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0229 - val_loss: 0.0393 - lr: 1.0000e-05 - 1s/epoch - 12ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0249 - val_loss: 0.0392 - lr: 1.0000e-05 - 1s/epoch - 12ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0246 - val_loss: 0.0391 - lr: 1.0000e-05 - 1s/epoch - 12ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0273 - val_loss: 0.0378 - lr: 1.0000e-05 - 1s/epoch - 13ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0236 - val_loss: 0.0381 - lr: 1.0000e-05 - 1s/epoch - 13ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0236 - val_loss: 0.0384 - lr: 1.0000e-05 - 1s/epoch - 13ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0237 - val_loss: 0.0403 - lr: 1.0000e-05 - 1s/epoch - 12ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.03169
90/90 - 1s - loss: 0.0249 - val_loss: 0.0390 - lr: 1.0000e-05 - 1s/epoch - 12ms/step
Epoch 00051: early stopping
SMA
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 218.84742268028705 
RMSE:	 14.79349257884314 
MAPE:	 12.049823582857737

EMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 55.80809642994332 
RMSE:	 7.470481673221836 
MAPE:	 6.155377787606487

WMA
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	48.13% Accuracy
MSE:	 29.732679793069057 
RMSE:	 5.452768085391956 
MAPE:	 4.481765047502752

DEMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	44.03% Accuracy
MSE:	 217.77293515692304 
RMSE:	 14.757131671057321 
MAPE:	 12.98122268289883

KAMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	43.28% Accuracy
MSE:	 41.92542421552212 
RMSE:	 6.474984495388551 
MAPE:	 5.290774580380154

MIDPOINT
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	45.15% Accuracy
MSE:	 49.176657562881935 
RMSE:	 7.012607044664768 
MAPE:	 5.71406626958764

T3
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	43.28% Accuracy
MSE:	 267.63588995886505 
RMSE:	 16.359580983596892 
MAPE:	 13.903459389418241

TEMA
Prediction vs Close:		51.12% Accuracy
Prediction vs Prediction:	49.63% Accuracy
MSE:	 54.19993303236338 
RMSE:	 7.362060379565179 
MAPE:	 6.351731050288764
Runtime: mins: 1.1107777968766663
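The MSE, RMSE, MAPE, and directional-accuracy figures printed above can be reproduced with a few lines of NumPy. This is a minimal sketch; the function names `report_errors` and `direction_accuracy` are illustrative, not the notebook's own helpers:

```python
import numpy as np

def report_errors(y_true, y_pred):
    """MSE, RMSE and MAPE (%) as printed in the result blocks above."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    mse = np.mean((y_true - y_pred) ** 2)
    rmse = mse ** 0.5
    mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100
    return mse, rmse, mape

def direction_accuracy(y_true, y_pred):
    """Percentage of steps where the predicted move matches the actual move
    (the basis of the 'Prediction vs Close' accuracy figures)."""
    true_dir = np.sign(np.diff(np.asarray(y_true, float)))
    pred_dir = np.sign(np.diff(np.asarray(y_pred, float)))
    return 100 * np.mean(true_dir == pred_dir)
```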

Architecture Used

In [ ]:
from google.colab import files
import cv2
uploaded = files.upload()
Saving Experiment5.png to Experiment5 (3).png
In [ ]:
img = cv2.imread('Experiment5.png')
plt.figure(figsize=(20, 10))
plt.axis("off")
plt.title('LSTM Architecture Experiment5', fontsize=18)
plt.imshow(img)
Out[ ]:
<matplotlib.image.AxesImage at 0x7f4c22e8c510>

Model Plots

In [165]:
with open('simulation5_data.json') as json_file:
    simulation5 = json.load(json_file)
fileimg = 'Experiment5'
In [166]:
for i in range(len(list(simulation5.keys()))):
  SIM = list(simulation5.keys())[i]
  plot_train(simulation5,SIM)
  plot_test(simulation5,SIM)
----- Train RMSE for SMA ----- 7.987564473981076
----- Train_MSE_LSTM for SMA ----- 63.801186226004575
----- Train MAE LSTM for SMA ----- 6.937391257458735
----- Test RMSE for SMA----- 14.79349257884314
----- Test_MSE_LSTM for SMA----- 218.84742268028705
----- Test_MAE_LSTM for SMA----- 12.049823582857737
----- Train RMSE for EMA ----- 9.285313379771722
----- Train_MSE_LSTM for EMA ----- 86.21704456056776
----- Train MAE LSTM for EMA ----- 8.068863589743078
----- Test RMSE for EMA----- 7.470481673221836
----- Test_MSE_LSTM for EMA----- 55.80809642994332
----- Test_MAE_LSTM for EMA----- 6.155377787606487
----- Train RMSE for WMA ----- 9.513866242248712
----- Train_MSE_LSTM for WMA ----- 90.51365087539962
----- Train MAE LSTM for WMA ----- 8.427476138172159
----- Test RMSE for WMA----- 5.452768085391956
----- Test_MSE_LSTM for WMA----- 29.732679793069057
----- Test_MAE_LSTM for WMA----- 4.481765047502752
----- Train RMSE for DEMA ----- 10.930836950788706
----- Train_MSE_LSTM for DEMA ----- 119.48319644472774
----- Train MAE LSTM for DEMA ----- 9.670853995935332
----- Test RMSE for DEMA----- 14.757131671057321
----- Test_MSE_LSTM for DEMA----- 217.77293515692304
----- Test_MAE_LSTM for DEMA----- 12.98122268289883
----- Train RMSE for KAMA ----- 9.334973586327665
----- Train_MSE_LSTM for KAMA ----- 87.14173185743519
----- Train MAE LSTM for KAMA ----- 8.362587355349287
----- Test RMSE for KAMA----- 6.474984495388551
----- Test_MSE_LSTM for KAMA----- 41.92542421552212
----- Test_MAE_LSTM for KAMA----- 5.290774580380154
----- Train RMSE for MIDPOINT ----- 8.34973583868917
----- Train_MSE_LSTM for MIDPOINT ----- 69.71808857589033
----- Train MAE LSTM for MIDPOINT ----- 7.404580064337196
----- Test RMSE for MIDPOINT----- 7.012607044664768
----- Test_MSE_LSTM for MIDPOINT----- 49.176657562881935
----- Test_MAE_LSTM for MIDPOINT----- 5.71406626958764
----- Train RMSE for T3 ----- 10.977729013572553
----- Train_MSE_LSTM for T3 ----- 120.51053429543263
----- Train MAE LSTM for T3 ----- 9.880825724747806
----- Test RMSE for T3----- 16.359580983596892
----- Test_MSE_LSTM for T3----- 267.63588995886505
----- Test_MAE_LSTM for T3----- 13.903459389418241
----- Train RMSE for TEMA ----- 6.910268535796713
----- Train_MSE_LSTM for TEMA ----- 47.75181123682204
----- Train MAE LSTM for TEMA ----- 4.726986691457281
----- Test RMSE for TEMA----- 7.362060379565179
----- Test_MSE_LSTM for TEMA----- 54.19993303236338
----- Test_MAE_LSTM for TEMA----- 6.351731050288764
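`plot_train` and `plot_test` are defined earlier in the notebook. Based on the output above, a rough sketch of what `plot_train` does is given below; the `train_errors` helper and the `'train_actual'`/`'train_pred'` dictionary keys are assumptions for illustration, not the notebook's actual structure:

```python
import numpy as np
import matplotlib.pyplot as plt

def train_errors(actual, pred):
    """RMSE, MSE and MAE of the training fit."""
    actual, pred = np.asarray(actual, float), np.asarray(pred, float)
    mse = np.mean((actual - pred) ** 2)
    return mse ** 0.5, mse, np.mean(np.abs(actual - pred))

def plot_train(simulation, ma):
    """Sketch: print train errors and plot the fit for one moving-average key.
    The 'train_actual'/'train_pred' keys are assumed, not from the notebook."""
    actual = simulation[ma]['train_actual']
    pred = simulation[ma]['train_pred']
    rmse, mse, mae = train_errors(actual, pred)
    print('----- Train RMSE for', ma, '-----', rmse)
    print('----- Train_MSE_LSTM for', ma, '-----', mse)
    print('----- Train MAE LSTM for', ma, '-----', mae)
    plt.plot(actual, label='actual')
    plt.plot(pred, label='prediction')
    plt.title(ma + ' train fit')
    plt.legend()
    plt.show()
```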

ARIMA with Exogenous Variable Multistep Multivariate LSTM Hybrid Model Experiment 6

In [ ]:
def get_arima_exog(dataframe, original_data, train_len, test_len):
    # prepare train and test data for the exogenous variables
    X_value = pd.DataFrame(dataframe.iloc[:, :])
    y_value = pd.DataFrame(dataframe.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    X_train, X_test = split_train_test(X_scale_dataset)
    y_train, y_test = split_train_test(y_scale_dataset)
    yc_train, yc_test = split_train_test(original_data)
    yc = yc_test.values.tolist()
    y_train_list = y_train.flatten().tolist()
    y_test_list = y_test.flatten().tolist()
    index_train, index_test = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)

    # Initialize model: stepwise auto_arima search with exogenous regressors
    model = auto_arima(y_train_list, exogenous=X_train, trace=True, error_action='ignore',
                       start_p=1, start_q=1, max_p=3, max_q=3, d=3,
                       suppress_warnings=True, stepwise=True, seasonal=True)

    # Determine model parameters
    print(model.summary())
    model.fit(y_train_list, maxiter=200)
    order = model.get_params()['order']
    print('ARIMA order:', order, '\n')

    # Generate walk-forward predictions: refit ARIMA on the growing history each step
    prediction = []
    for i in range(len(y_test_list)):
        model = pmdarima.ARIMA(order=order)
        model.fit(y_train_list)
        prediction.append(model.predict()[0])
        y_train_list.append(y_test_list[i])

    # Invert the scaling so errors are computed on the original price scale
    predictionte = y_scaler.inverse_transform(np.array(prediction).reshape(-1, 1))
    y_test_ = y_scaler.inverse_transform(np.array(y_test_list).reshape(-1, 1))

    # Generate error data
    mse = mean_squared_error(y_test_, predictionte)
    rmse = mse ** 0.5
    mae = mean_absolute_error(y_test_, predictionte)
    return yc, predictionte.flatten().tolist(), mse, rmse, mae
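The `split_train_test` and `predict_index` helpers used above are defined earlier in the notebook. For reference, a minimal `split_train_test` consistent with how it is called here would be a sequential, unshuffled split; the 0.7 ratio is an assumption, not the notebook's actual value:

```python
import numpy as np

def split_train_test(data, train_ratio=0.7):
    """Sequential (unshuffled) split, as required for time-series data.
    The 0.7 ratio is an assumed default; the notebook defines its own."""
    data = np.asarray(data)
    split = int(len(data) * train_ratio)
    return data[:split], data[split:]
```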
In [ ]:
def get_lstm(data,original_data, train_len, test_len,img_file,ma ,lstm_len=3):
    # prepare train and test data
    X_value = pd.DataFrame(data.iloc[:, :])
    y_value = pd.DataFrame(data.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scaler.fit(X_value)
    y_scaler.fit(y_value)
    # Get data and check shape
    X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)#X will be of shape 224 X 3 X 21 (each 3 X 21 array will be 3 days' worth of data). yc will have the corresponding closing price value
    # pdb.set_trace()
    X_train, X_test, = split_train_test(X)
    y_train, y_test, = split_train_test(y)
    # yc_train, yc_test, = split_train_test(original_data)
    index_train, index_test, = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    det =20
    input_dim = X_train.shape[1]#3
    feature_size = X_train.shape[2]#24
    output_dim = y_train.shape[1]#1



    # Option 1
    # Set up & fit LSTM RNN
    # model = Sequential()
    # model.add(LSTM(256, activation='relu', kernel_initializer='he_normal', input_shape=(input_dim, feature_size)))
    # model.add(Dense(units=64,activation='relu'))
    # model.add(Dropout(0.5))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(learning_rate = 0.001), loss='mse')

    # ## Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()


    # # option 2
    model = Sequential()
    model.add(Bidirectional(LSTM(units=128), input_shape=(input_dim, feature_size)))
    model.add(Dense(64))
    model.add(Dense(units=output_dim))
    # Note: 'accuracy' is uninformative for a regression loss; monitor val_loss instead
    model.compile(optimizer=Adam(learning_rate=0.001), loss='mean_squared_error', metrics=['accuracy'])
    # Common code
    callbacks = [
    EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    ModelCheckpoint('LSTM6.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    fname1 = img_file+'.png'
    tensorflow.keras.utils.plot_model(
        model, to_file=fname1, show_shapes=True, show_dtype=False,
        show_layer_names=True, expand_nested=False, dpi=96,
        layer_range=None, show_layer_activations=False
    )
    history = model.fit(X_train, y_train, epochs=500, batch_size=int( optimized_period[ma]), verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # plot loss
    fname2 = img_file + '-' + ma
    pyplot.title(fname2 + ' Loss')
    pyplot.xlabel("Epochs")
    pyplot.ylabel("Loss")
    pyplot.plot(history.history['loss'], label='train')
    pyplot.plot(history.history['val_loss'], label='validation')
    pyplot.legend()
    pyplot.savefig(fname2 + '.png', dpi='figure')
    pyplot.show()

    # Option 3
    # define custom activation
    # 
    # class Double_Tanh(Activation):
    #     def __init__(self, activation, **kwargs):
    #         super(Double_Tanh, self).__init__(activation, **kwargs)
    #         self.__name__ = 'double_tanh'

    # def double_tanh(x):
    #     return (K.tanh(x) * 2)

    # get_custom_objects().update({'double_tanh':Double_Tanh(double_tanh)})
    #     # Model Generation
    # model = Sequential()
    # #check https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
    # model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2, kernel_regularizer=l1_l2(0.00,0.00), bias_regularizer=l1_l2(0.00,0.00)))
    # model.add(Dense(1))
    # model.add(Activation(double_tanh))
    # model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()

    # Option 4
    # Set up & fit LSTM RNN
    # model = Sequential()
    # model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(X_train.shape[1], 1)))
    # model.add(LSTM(units=int(lstm_len/2)))
    # model.add(Dense(1, activation='sigmoid'))
    # model.compile(loss='mean_squared_error', optimizer='adam')
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()



    # Generate predictions
    predictiontr = model.predict(X_train, verbose=0)
    predictiontr = y_scaler.inverse_transform(predictiontr).flatten().tolist()
    # Generate error data (compare prediction and target in the original price scale)
    Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()
    mse_tr = mean_squared_error(Original_tr, predictiontr)
    rmse_tr = mse_tr ** 0.5
    mae_tr = mean_absolute_error(Original_tr, predictiontr)


    predictionte = model.predict(X_test, verbose=0)
    predictionte = (y_scaler.inverse_transform(predictionte) - det).flatten().tolist()  # det offset applied only to test predictions
    # Generate error data (compare prediction and target in the original price scale)
    Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()
    mse_te = mean_squared_error(Original_te, predictionte)
    rmse_te = mse_te ** 0.5
    mae_te = mean_absolute_error(Original_te, predictionte)

    return Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,Original_te,predictionte, mse_te,rmse_te,mae_te
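The error metrics in `get_lstm` are only meaningful when prediction and target live in the same scale. This toy `MinMaxScaler` round-trip (invented prices, not the notebook's data) illustrates why comparing scaled targets against inverse-transformed predictions inflates the error:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error

prices = np.array([[100.0], [110.0], [105.0], [120.0]])
scaler = MinMaxScaler(feature_range=(-1, 1))
scaled = scaler.fit_transform(prices)

# A perfect prediction in scaled space, mapped back to price space
pred_prices = scaler.inverse_transform(scaled)

mse_consistent = mean_squared_error(prices, pred_prices)  # both in price space: ~0
mse_mismatched = mean_squared_error(scaled, pred_prices)  # scales differ: huge
```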
In [ ]:
if __name__ == '__main__':
    start_time = timeit.default_timer()
    simulation6 = {}
    imgfile = 'Experiment6'
    for ma in optimized_period:
                print(ma)
                print(functions[ma])
                print(int(optimized_period[ma]))
              # if ma == 'SMA':
                low_vol = df.apply(lambda c:  functions[ma](c, timeperiod = int( optimized_period[ma])))
                low_vol = low_vol.fillna(0)
                low_vol_data = df['close']
                high_vol = pd.DataFrame()
                df2 = df.copy()
                for i in df2.columns:
                  if i in low_vol.columns:
                    high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
                high_vol_data = df['close']
                ## *****************************************************
                # Generate ARIMA and LSTM predictions
                print('\nWorking on ' + ma + ' predictions')
                try:
                  print('parameters used : ', train_len, test_len)
                  low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse,low_vol_mae = get_arima_exog(low_vol,low_vol_data, train_len, test_len)
                except:
                    print('ARIMA error, skipping to next MA type')
                    continue
                Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,high_vol_Original, high_vol_prediction, high_vol_mse, high_vol_rmse,high_vol_mae, = get_lstm(high_vol,high_vol_data, train_len, test_len,imgfile,ma)
                final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr)
                mse_ftr = mean_squared_error(df['close'].head(train_len).values,final_prediction_tr.values)
                rmse_ftr = mse_ftr ** 0.5
                mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
                mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)

                final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)  # drop the first n_steps_in ARIMA values to align the two series
                mse = mean_squared_error(df['close'].tail(test_len).values,final_prediction.values)
                rmse = mse ** 0.5
                mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
                mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
                # Generate prediction accuracy
                actual = df['close'].tail(test_len).values
                result_1 = []
                result_2 = []
                for i in range(1, len(final_prediction)):
                    # Compare prediction to previous close price
                    if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
                        result_1.append(1)
                    elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
                        result_1.append(1)
                    else:
                        result_1.append(0)

                    # Compare prediction to previous prediction
                    if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
                        result_2.append(1)
                    elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
                        result_2.append(1)
                    else:
                        result_2.append(0)

                accuracy_1 = np.mean(result_1)
                accuracy_2 = np.mean(result_2)

                simulation6[ma] = {'low_vol': {'original':list(low_vol_Original), 'prediction': list(low_vol_prediction) , 'mse': low_vol_mse,
                                              'rmse': low_vol_rmse, 'mae' : low_vol_mae},
                                  'high_vol': {'original':list(high_vol_Original),'prediction': list(high_vol_prediction), 'mse': high_vol_mse,
                                              'rmse': high_vol_rmse, 'mae' : high_vol_mae},
                                  'final_tr': {'original':df['close'].head(train_len).tolist(),'prediction': final_prediction_tr.values.tolist(), 'mse': mse_ftr,
                                              'rmse': rmse_ftr, 'mae' : mae_ftr},
                                  'final': {'original': df['close'].tail(test_len).tolist(), 'prediction': final_prediction.values.tolist(), 'mse': mse,
                                            'rmse': rmse, 'mae': mae },
                                  'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}

                # save simulation data here as checkpoint
                with open('simulation6_data.json', 'w') as fp:
                    json.dump(simulation6, fp)

                for key in simulation6.keys():
                    print('\n' + key)
                    print('Prediction vs Close:\t\t' + str(round(100*simulation6[key]['accuracy']['prediction vs close'], 2))
                          + '% Accuracy')
                    print('Prediction vs Prediction:\t' + str(round(100*simulation6[key]['accuracy']['prediction vs prediction'], 2))
                          + '% Accuracy')
                    print('MSE:\t', simulation6[key]['final']['mse'],
                          '\nRMSE:\t', simulation6[key]['final']['rmse'],
                          '\nMAE:\t', simulation6[key]['final']['mae'])
    elapsed = timeit.default_timer() - start_time
    print('Runtime (mins):', elapsed/60)
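The directional-accuracy loop in the cell above can be expressed as a small standalone helper. This sketch (toy numbers; an unchanged price counts as a miss, as in the loop above) reproduces the "prediction vs close" variant:

```python
import numpy as np

def directional_accuracy(pred, actual):
    """Fraction of steps where the predicted move (relative to the
    previous actual close) has the same sign as the actual move."""
    hits = []
    for i in range(1, len(pred)):
        if pred[i] > actual[i - 1] and actual[i] > actual[i - 1]:
            hits.append(1)
        elif pred[i] < actual[i - 1] and actual[i] < actual[i - 1]:
            hits.append(1)
        else:
            hits.append(0)
    return float(np.mean(hits))

actual = [10.0, 11.0, 10.5, 10.8]
pred = [10.2, 11.3, 10.2, 10.4]  # hypothetical predictions
acc = directional_accuracy(pred, actual)  # 2 of 3 moves called correctly
```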
SMA
SMA([input_arrays], [timeperiod=30])

Simple Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
17

Working on SMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-15057.252, Time=5.42 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-13616.841, Time=2.92 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-15177.809, Time=11.05 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14725.568, Time=12.12 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-15511.840, Time=16.05 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-15663.563, Time=16.95 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-15093.498, Time=7.98 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-15194.504, Time=11.90 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=-14885.340, Time=20.98 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 105.406 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood                7855.782
Date:                Sun, 12 Dec 2021   AIC                         -15663.563
Time:                        21:18:05   BIC                         -15550.983
Sample:                             0   HQIC                        -15620.328
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -1.202e-05   4.78e-05     -0.251      0.801      -0.000    8.17e-05
x2         -1.202e-05   2.63e-05     -0.458      0.647   -6.35e-05    3.95e-05
x3          -1.21e-05      0.000     -0.118      0.906      -0.000       0.000
x4             1.0000   3.59e-05   2.79e+04      0.000       1.000       1.000
x5         -1.149e-05   3.47e-05     -0.332      0.740   -7.94e-05    5.65e-05
x6         -1.354e-05   2.94e-05     -0.461      0.645   -7.11e-05     4.4e-05
x7         -1.198e-05   3.25e-06     -3.693      0.000   -1.83e-05   -5.62e-06
x8             0.0027   9.17e-06    293.847      0.000       0.003       0.003
x9         -8.458e-07      0.000     -0.006      0.995      -0.000       0.000
x10            0.0005      0.000      1.213      0.225      -0.000       0.001
x11           -0.0027   4.93e-05    -54.454      0.000      -0.003      -0.003
x12            0.0007   3.53e-05     19.122      0.000       0.001       0.001
x13        -1.207e-05   2.16e-05     -0.559      0.576   -5.44e-05    3.03e-05
x14        -3.571e-05   1.38e-05     -2.581      0.010   -6.28e-05   -8.59e-06
x15        -1.308e-05   2.71e-06     -4.820      0.000   -1.84e-05   -7.76e-06
x16         -1.12e-05   4.71e-05     -0.238      0.812      -0.000    8.11e-05
x17        -1.059e-05   1.48e-05     -0.715      0.474   -3.96e-05    1.84e-05
x18         -2.03e-05   5.97e-05     -0.340      0.734      -0.000    9.68e-05
x19        -1.389e-05   3.69e-05     -0.376      0.707   -8.63e-05    5.85e-05
x20         2.105e-05      0.000      0.107      0.915      -0.000       0.000
ar.L1         -1.1996   4.09e-05  -2.93e+04      0.000      -1.200      -1.200
ar.L2         -0.8995   1.54e-05  -5.82e+04      0.000      -0.900      -0.899
ar.L3         -0.3999   1.46e-05  -2.74e+04      0.000      -0.400      -0.400
sigma2      2.425e-10   7.55e-11      3.213      0.001    9.46e-11     3.9e-10
===================================================================================
Ljung-Box (L1) (Q):                  14.46   Jarque-Bera (JB):           2454147.19
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                            -3.95
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.38
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.88e+20. Standard errors may be unstable.
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.04653, saving model to LSTM6.h5
48/48 - 5s - loss: 0.1650 - accuracy: 0.0000e+00 - val_loss: 0.0465 - val_accuracy: 0.0037 - lr: 0.0010 - 5s/epoch - 105ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.04653 to 0.02488, saving model to LSTM6.h5
48/48 - 0s - loss: 0.0246 - accuracy: 0.0000e+00 - val_loss: 0.0249 - val_accuracy: 0.0037 - lr: 0.0010 - 381ms/epoch - 8ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.02488
48/48 - 0s - loss: 0.0121 - accuracy: 0.0000e+00 - val_loss: 0.0330 - val_accuracy: 0.0037 - lr: 0.0010 - 378ms/epoch - 8ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.02488 to 0.00434, saving model to LSTM6.h5
48/48 - 0s - loss: 0.0090 - accuracy: 0.0000e+00 - val_loss: 0.0043 - val_accuracy: 0.0037 - lr: 0.0010 - 417ms/epoch - 9ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.00434
48/48 - 0s - loss: 0.0023 - accuracy: 0.0000e+00 - val_loss: 0.0167 - val_accuracy: 0.0037 - lr: 0.0010 - 364ms/epoch - 8ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.00434
48/48 - 0s - loss: 0.0043 - accuracy: 0.0000e+00 - val_loss: 0.0097 - val_accuracy: 0.0037 - lr: 0.0010 - 381ms/epoch - 8ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.00434
48/48 - 0s - loss: 0.0061 - accuracy: 0.0000e+00 - val_loss: 0.0266 - val_accuracy: 0.0037 - lr: 0.0010 - 358ms/epoch - 7ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.00434
48/48 - 0s - loss: 0.0188 - accuracy: 0.0000e+00 - val_loss: 0.0266 - val_accuracy: 0.0037 - lr: 0.0010 - 356ms/epoch - 7ms/step
Epoch 9/500

Epoch 00009: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00009: val_loss did not improve from 0.00434
48/48 - 0s - loss: 0.0371 - accuracy: 0.0000e+00 - val_loss: 0.0119 - val_accuracy: 0.0037 - lr: 0.0010 - 379ms/epoch - 8ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.00434
48/48 - 0s - loss: 0.0201 - accuracy: 0.0000e+00 - val_loss: 0.0070 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 371ms/epoch - 8ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.00434
48/48 - 0s - loss: 0.0030 - accuracy: 0.0000e+00 - val_loss: 0.0053 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 360ms/epoch - 8ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.00434
48/48 - 0s - loss: 0.0023 - accuracy: 0.0000e+00 - val_loss: 0.0048 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 353ms/epoch - 7ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.00434
48/48 - 0s - loss: 0.0017 - accuracy: 0.0000e+00 - val_loss: 0.0043 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 365ms/epoch - 8ms/step
Epoch 14/500

Epoch 00014: val_loss improved from 0.00434 to 0.00399, saving model to LSTM6.h5
48/48 - 0s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0040 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 401ms/epoch - 8ms/step
Epoch 15/500

Epoch 00015: val_loss improved from 0.00399 to 0.00375, saving model to LSTM6.h5
48/48 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0038 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 381ms/epoch - 8ms/step
Epoch 16/500

Epoch 00016: val_loss improved from 0.00375 to 0.00364, saving model to LSTM6.h5
48/48 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 435ms/epoch - 9ms/step
Epoch 17/500

Epoch 00017: val_loss improved from 0.00364 to 0.00363, saving model to LSTM6.h5
48/48 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 387ms/epoch - 8ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.00363
48/48 - 0s - loss: 9.6609e-04 - accuracy: 0.0000e+00 - val_loss: 0.0037 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 383ms/epoch - 8ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.00363
48/48 - 0s - loss: 9.3871e-04 - accuracy: 0.0000e+00 - val_loss: 0.0039 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 368ms/epoch - 8ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.00363
48/48 - 0s - loss: 9.2101e-04 - accuracy: 0.0000e+00 - val_loss: 0.0041 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 355ms/epoch - 7ms/step
Epoch 21/500

Epoch 00021: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00021: val_loss did not improve from 0.00363
48/48 - 0s - loss: 9.0860e-04 - accuracy: 0.0000e+00 - val_loss: 0.0043 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 379ms/epoch - 8ms/step
[... epochs 22-67 omitted: val_loss did not improve from 0.00363; training loss decreased slowly from 8.98e-04 to 8.37e-04 at lr=1.0e-05 ...]
Epoch 00067: early stopping
SMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 63.854129927017006 
RMSE:	 7.990877919666713 
MAE:	 6.455960052106778
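The SMA run above rests on decomposing the close series into a smoothed moving-average component (fed to ARIMA) and a residual component (fed to the LSTM), as in the `df2[i].subtract(low_vol[i])` loop. A tiny sketch of that split-and-recombine idea, with toy data and an assumed `close` column, using a pandas rolling mean in place of the notebook's TA-Lib functions:

```python
import pandas as pd

df = pd.DataFrame({'close': [10.0, 12.0, 11.0, 13.0, 14.0, 13.0, 15.0, 16.0]})
period = 3

low_vol = df.rolling(period).mean().fillna(0)  # smoothed (low-volatility) part, modeled by ARIMA
high_vol = df.subtract(low_vol, fill_value=0)  # residual (high-volatility) part, modeled by the LSTM

# By construction the two components sum back to the original series
recon = low_vol + high_vol
```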
EMA
EMA([input_arrays], [timeperiod=30])

Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
51

Working on EMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17007.807, Time=3.36 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14576.593, Time=5.07 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-15585.734, Time=9.60 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14574.593, Time=8.22 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-15458.426, Time=12.33 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15621.247, Time=13.62 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-17231.605, Time=22.27 sec
 ARIMA(0,3,3)(0,0,0)[0]             : AIC=-14570.593, Time=10.31 sec
 ARIMA(1,3,3)(0,0,0)[0]             : AIC=-16761.093, Time=17.81 sec
 ARIMA(0,3,2)(0,0,0)[0] intercept   : AIC=-13173.936, Time=33.49 sec

Best model:  ARIMA(0,3,2)(0,0,0)[0]          
Total fit time: 136.109 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(0, 3, 2)   Log Likelihood                8638.803
Date:                Sun, 12 Dec 2021   AIC                         -17231.605
Time:                        21:23:52   BIC                         -17123.716
Sample:                             0   HQIC                        -17190.171
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -5.101e-09   4.36e-05     -0.000      1.000   -8.54e-05    8.54e-05
x2         -5.085e-09   4.35e-05     -0.000      1.000   -8.53e-05    8.53e-05
x3          -5.12e-09   4.36e-05     -0.000      1.000   -8.56e-05    8.55e-05
x4             1.0000   4.36e-05   2.29e+04      0.000       1.000       1.000
x5         -4.635e-09   4.15e-05     -0.000      1.000   -8.14e-05    8.14e-05
x6         -1.766e-08   7.54e-05     -0.000      1.000      -0.000       0.000
x7         -5.054e-09   4.34e-05     -0.000      1.000    -8.5e-05     8.5e-05
x8         -4.941e-09   4.29e-05     -0.000      1.000   -8.41e-05    8.41e-05
x9         -3.138e-10   8.71e-06   -3.6e-05      1.000   -1.71e-05    1.71e-05
x10        -1.002e-09   1.85e-05  -5.41e-05      1.000   -3.63e-05    3.63e-05
x11        -4.879e-09   4.26e-05     -0.000      1.000   -8.36e-05    8.36e-05
x12        -4.991e-09   4.31e-05     -0.000      1.000   -8.46e-05    8.45e-05
x13        -5.099e-09   4.36e-05     -0.000      1.000   -8.54e-05    8.54e-05
x14        -3.925e-08      0.000     -0.000      1.000      -0.000       0.000
x15        -4.597e-09   4.13e-05     -0.000      1.000    -8.1e-05     8.1e-05
x16        -1.164e-08    6.6e-05     -0.000      1.000      -0.000       0.000
x17        -4.702e-09   4.19e-05     -0.000      1.000   -8.22e-05    8.22e-05
x18        -8.297e-10   1.65e-05  -5.02e-05      1.000   -3.24e-05    3.24e-05
x19        -5.725e-09   4.61e-05     -0.000      1.000   -9.04e-05    9.04e-05
x20        -5.511e-09   4.28e-05     -0.000      1.000    -8.4e-05    8.39e-05
ma.L1         -1.3891   1.96e-08  -7.08e+07      0.000      -1.389      -1.389
ma.L2          0.4027   2.02e-08   1.99e+07      0.000       0.403       0.403
sigma2      7.547e-11   6.92e-11      1.091      0.275   -6.01e-11    2.11e-10
===================================================================================
Ljung-Box (L1) (Q):                  67.97   Jarque-Bera (JB):           6306943.47
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                            12.31
Prob(H) (two-sided):                  0.00   Kurtosis:                       435.93
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 3.3e+24. Standard errors may be unstable.
ARIMA order: (0, 3, 2) 
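The AIC in the summary above follows directly from the reported log likelihood via AIC = 2k − 2·ln L, where k counts the estimated parameters (here 20 exogenous coefficients, two MA terms, and sigma2, so k = 23). A quick check against the printed values:

```python
log_likelihood = 8638.803  # from the SARIMAX summary above
k = 23                     # 20 exog coefs + ma.L1 + ma.L2 + sigma2

# AIC = 2k - 2*lnL; agrees with the reported -17231.605 up to the
# rounding of the printed log likelihood.
aic = 2 * k - 2 * log_likelihood
```

The stepwise search simply fits candidate orders and keeps the one minimizing this quantity.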

/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
  super(Adam, self).__init__(name, **kwargs)
[Epochs 1-14: val_loss improved from inf to a best of 0.00579, saving model to LSTM6.h5; training loss fell from 0.1719 to 9.5e-04]
[Epochs 15-63: val_loss did not improve from 0.00579; ReduceLROnPlateau cut lr from 1e-03 to 1e-04 (epoch 18) and to 1e-05 (epoch 23); training loss plateaued near 8.4e-04]
Epoch 00064: early stopping
EMA
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 60.00625951694821 
RMSE:	 7.7463707319588195 
MAPE:	 6.477662803945572
WMA
WMA([input_arrays], [timeperiod=30])

Weighted Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
49
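The weighted moving average above uses linearly increasing weights 1..timeperiod, so the most recent bar counts the most. A minimal sketch:

```python
def wma(prices, timeperiod=30):
    """Weighted moving average with linear weights 1..timeperiod,
    newest price weighted highest. Warm-up positions are None."""
    denom = timeperiod * (timeperiod + 1) / 2  # sum of the weights
    out = [None] * (timeperiod - 1)
    for i in range(timeperiod - 1, len(prices)):
        window = prices[i - timeperiod + 1 : i + 1]
        out.append(sum(w * p for w, p in enumerate(window, start=1)) / denom)
    return out

# For [1, 2, 3, 4, 5] with timeperiod=3 the last value is
# (1*3 + 2*4 + 3*5) / 6 = 26 / 6
values = wma([1, 2, 3, 4, 5], timeperiod=3)
```

Because the weights front-load recent data, WMA tracks turns faster than SMA at the same period, which is consistent with its slightly better accuracy figures later in this run.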

Working on WMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-15462.744, Time=15.41 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-13144.103, Time=2.94 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16179.868, Time=7.16 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14670.350, Time=14.73 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-15643.233, Time=20.76 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-15673.437, Time=18.87 sec
 ARIMA(1,3,0)(0,0,0)[0] intercept   : AIC=-15494.535, Time=8.24 sec

Best model:  ARIMA(1,3,0)(0,0,0)[0]          
Total fit time: 88.122 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(1, 3, 0)   Log Likelihood                8111.934
Date:                Sun, 12 Dec 2021   AIC                         -16179.868
Time:                        21:31:17   BIC                         -16076.670
Sample:                             0   HQIC                        -16140.236
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -1.474e-05      0.000     -0.048      0.961      -0.001       0.001
x2         -1.471e-05      0.000     -0.041      0.967      -0.001       0.001
x3         -1.475e-05      0.000     -0.072      0.943      -0.000       0.000
x4             1.0000      0.000   3644.383      0.000       0.999       1.001
x5         -1.405e-05      0.000     -0.051      0.960      -0.001       0.001
x6         -2.487e-05   4.39e-05     -0.567      0.571      -0.000    6.11e-05
x7         -1.467e-05      0.000     -0.134      0.893      -0.000       0.000
x8             0.0004      0.000      3.240      0.001       0.000       0.001
x9          3.739e-06      0.001      0.003      0.998      -0.003       0.003
x10           -0.0006      0.001     -0.447      0.655      -0.003       0.002
x11            0.0024   2.31e-05    105.301      0.000       0.002       0.002
x12           -0.0019      0.000     -7.274      0.000      -0.002      -0.001
x13        -1.473e-05      0.000     -0.113      0.910      -0.000       0.000
x14        -4.124e-05      0.000     -0.135      0.893      -0.001       0.001
x15        -1.347e-05      0.000     -0.095      0.924      -0.000       0.000
x16        -2.422e-05      0.000     -0.100      0.920      -0.000       0.000
x17        -1.471e-05      0.000     -0.112      0.911      -0.000       0.000
x18         2.884e-06      0.000      0.006      0.995      -0.001       0.001
x19        -1.493e-05      0.000     -0.105      0.916      -0.000       0.000
x20         3.469e-06      0.000      0.007      0.994      -0.001       0.001
ar.L1         -0.6665   6.84e-05  -9743.045      0.000      -0.667      -0.666
sigma2      1.498e-10   7.34e-11      2.042      0.041    6.03e-12    2.94e-10
===================================================================================
Ljung-Box (L1) (Q):                  89.34   Jarque-Bera (JB):           3270298.31
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             5.18
Prob(H) (two-sided):                  0.00   Kurtosis:                       315.08
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 3.61e+19. Standard errors may be unstable.
ARIMA order: (1, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.17569, saving model to LSTM6.h5
17/17 - 5s - loss: 0.1233 - accuracy: 0.0000e+00 - val_loss: 0.1757 - val_accuracy: 0.0000e+00 - lr: 0.0010 - 5s/epoch - 310ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.17569 to 0.04607, saving model to LSTM6.h5
17/17 - 0s - loss: 0.0676 - accuracy: 0.0000e+00 - val_loss: 0.0461 - val_accuracy: 0.0037 - lr: 0.0010 - 177ms/epoch - 10ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.04607 to 0.01079, saving model to LSTM6.h5
17/17 - 0s - loss: 0.0091 - accuracy: 0.0000e+00 - val_loss: 0.0108 - val_accuracy: 0.0037 - lr: 0.0010 - 171ms/epoch - 10ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.01079
17/17 - 0s - loss: 0.0063 - accuracy: 0.0000e+00 - val_loss: 0.0188 - val_accuracy: 0.0037 - lr: 0.0010 - 187ms/epoch - 11ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.01079
17/17 - 0s - loss: 0.0019 - accuracy: 0.0000e+00 - val_loss: 0.0168 - val_accuracy: 0.0037 - lr: 0.0010 - 159ms/epoch - 9ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.01079
17/17 - 0s - loss: 0.0022 - accuracy: 0.0000e+00 - val_loss: 0.0131 - val_accuracy: 0.0037 - lr: 0.0010 - 167ms/epoch - 10ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.01079
17/17 - 0s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0122 - val_accuracy: 0.0037 - lr: 0.0010 - 151ms/epoch - 9ms/step
Epoch 8/500

Epoch 00008: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00008: val_loss did not improve from 0.01079
17/17 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0119 - val_accuracy: 0.0037 - lr: 0.0010 - 150ms/epoch - 9ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.01079
17/17 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0139 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 160ms/epoch - 9ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.01079
17/17 - 0s - loss: 9.4240e-04 - accuracy: 0.0000e+00 - val_loss: 0.0144 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 154ms/epoch - 9ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.01079
17/17 - 0s - loss: 9.3401e-04 - accuracy: 0.0000e+00 - val_loss: 0.0141 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 157ms/epoch - 9ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.01079
17/17 - 0s - loss: 9.2048e-04 - accuracy: 0.0000e+00 - val_loss: 0.0142 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 164ms/epoch - 10ms/step
Epoch 13/500

Epoch 00013: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00013: val_loss did not improve from 0.01079
17/17 - 0s - loss: 9.1633e-04 - accuracy: 0.0000e+00 - val_loss: 0.0144 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 150ms/epoch - 9ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.01079
17/17 - 0s - loss: 9.1320e-04 - accuracy: 0.0000e+00 - val_loss: 0.0144 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 147ms/epoch - 9ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.01079
17/17 - 0s - loss: 9.1285e-04 - accuracy: 0.0000e+00 - val_loss: 0.0144 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 184ms/epoch - 11ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.01079
17/17 - 0s - loss: 9.1250e-04 - accuracy: 0.0000e+00 - val_loss: 0.0144 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 134ms/epoch - 8ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.01079
17/17 - 0s - loss: 9.1214e-04 - accuracy: 0.0000e+00 - val_loss: 0.0145 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 139ms/epoch - 8ms/step
Epoch 18/500

Epoch 00018: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00018: val_loss did not improve from 0.01079
17/17 - 0s - loss: 9.1178e-04 - accuracy: 0.0000e+00 - val_loss: 0.0145 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 178ms/epoch - 10ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.01079
17/17 - 0s - loss: 9.1141e-04 - accuracy: 0.0000e+00 - val_loss: 0.0145 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 153ms/epoch - 9ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.01079
17/17 - 0s - loss: 9.1104e-04 - accuracy: 0.0000e+00 - val_loss: 0.0145 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 147ms/epoch - 9ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.01079
17/17 - 0s - loss: 9.1067e-04 - accuracy: 0.0000e+00 - val_loss: 0.0145 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 156ms/epoch - 9ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.01079
17/17 - 0s - loss: 9.1029e-04 - accuracy: 0.0000e+00 - val_loss: 0.0146 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 147ms/epoch - 9ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.01079
17/17 - 0s - loss: 9.0991e-04 - accuracy: 0.0000e+00 - val_loss: 0.0146 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 190ms/epoch - 11ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.01079
17/17 - 0s - loss: 9.0953e-04 - accuracy: 0.0000e+00 - val_loss: 0.0146 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 154ms/epoch - 9ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.01079
17/17 - 0s - loss: 9.0914e-04 - accuracy: 0.0000e+00 - val_loss: 0.0146 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 168ms/epoch - 10ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.01079
17/17 - 0s - loss: 9.0875e-04 - accuracy: 0.0000e+00 - val_loss: 0.0146 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 149ms/epoch - 9ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.01079
17/17 - 0s - loss: 9.0836e-04 - accuracy: 0.0000e+00 - val_loss: 0.0147 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 150ms/epoch - 9ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.01079
17/17 - 0s - loss: 9.0796e-04 - accuracy: 0.0000e+00 - val_loss: 0.0147 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 147ms/epoch - 9ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.01079
17/17 - 0s - loss: 9.0757e-04 - accuracy: 0.0000e+00 - val_loss: 0.0147 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 153ms/epoch - 9ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.01079
17/17 - 0s - loss: 9.0717e-04 - accuracy: 0.0000e+00 - val_loss: 0.0147 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 175ms/epoch - 10ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.01079
17/17 - 0s - loss: 9.0677e-04 - accuracy: 0.0000e+00 - val_loss: 0.0148 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 170ms/epoch - 10ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.01079
17/17 - 0s - loss: 9.0637e-04 - accuracy: 0.0000e+00 - val_loss: 0.0148 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 153ms/epoch - 9ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.01079
17/17 - 0s - loss: 9.0596e-04 - accuracy: 0.0000e+00 - val_loss: 0.0148 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 163ms/epoch - 10ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.01079
17/17 - 0s - loss: 9.0556e-04 - accuracy: 0.0000e+00 - val_loss: 0.0148 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 152ms/epoch - 9ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.01079
17/17 - 0s - loss: 9.0515e-04 - accuracy: 0.0000e+00 - val_loss: 0.0149 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 149ms/epoch - 9ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.01079
17/17 - 0s - loss: 9.0474e-04 - accuracy: 0.0000e+00 - val_loss: 0.0149 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 161ms/epoch - 9ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.01079
17/17 - 0s - loss: 9.0433e-04 - accuracy: 0.0000e+00 - val_loss: 0.0149 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 204ms/epoch - 12ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.01079
17/17 - 0s - loss: 9.0392e-04 - accuracy: 0.0000e+00 - val_loss: 0.0149 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 148ms/epoch - 9ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.01079
17/17 - 0s - loss: 9.0350e-04 - accuracy: 0.0000e+00 - val_loss: 0.0150 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 155ms/epoch - 9ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.01079
17/17 - 0s - loss: 9.0308e-04 - accuracy: 0.0000e+00 - val_loss: 0.0150 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 188ms/epoch - 11ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.01079
17/17 - 0s - loss: 9.0266e-04 - accuracy: 0.0000e+00 - val_loss: 0.0150 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 162ms/epoch - 10ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.01079
17/17 - 0s - loss: 9.0223e-04 - accuracy: 0.0000e+00 - val_loss: 0.0151 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 158ms/epoch - 9ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.01079
17/17 - 0s - loss: 9.0181e-04 - accuracy: 0.0000e+00 - val_loss: 0.0151 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 168ms/epoch - 10ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.01079
17/17 - 0s - loss: 9.0138e-04 - accuracy: 0.0000e+00 - val_loss: 0.0151 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 157ms/epoch - 9ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.01079
17/17 - 0s - loss: 9.0095e-04 - accuracy: 0.0000e+00 - val_loss: 0.0152 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 142ms/epoch - 8ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.01079
17/17 - 0s - loss: 9.0051e-04 - accuracy: 0.0000e+00 - val_loss: 0.0152 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 144ms/epoch - 8ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.01079
17/17 - 0s - loss: 9.0007e-04 - accuracy: 0.0000e+00 - val_loss: 0.0152 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 147ms/epoch - 9ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.01079
17/17 - 0s - loss: 8.9963e-04 - accuracy: 0.0000e+00 - val_loss: 0.0152 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 147ms/epoch - 9ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.01079
17/17 - 0s - loss: 8.9919e-04 - accuracy: 0.0000e+00 - val_loss: 0.0153 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 169ms/epoch - 10ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.01079
17/17 - 0s - loss: 8.9874e-04 - accuracy: 0.0000e+00 - val_loss: 0.0153 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 154ms/epoch - 9ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.01079
17/17 - 0s - loss: 8.9829e-04 - accuracy: 0.0000e+00 - val_loss: 0.0153 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 145ms/epoch - 9ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.01079
17/17 - 0s - loss: 8.9783e-04 - accuracy: 0.0000e+00 - val_loss: 0.0154 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 150ms/epoch - 9ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.01079
17/17 - 0s - loss: 8.9737e-04 - accuracy: 0.0000e+00 - val_loss: 0.0154 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 140ms/epoch - 8ms/step
Epoch 00053: early stopping
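The schedule visible in this log (learning-rate drops at epochs 7, 12, and 17, then early stopping at 53) can be mimicked in plain Python. This is a sketch of the ReduceLROnPlateau + EarlyStopping behaviour, not the notebook's actual callbacks; the patience values (5 for the LR schedule, 50 for stopping) are guesses read off the log:

```python
class PlateauPolicy:
    """Sketch of ReduceLROnPlateau + EarlyStopping as seen in the log above.

    After `patience` epochs without a new best val_loss, multiply the LR by
    `factor` (clipped at `min_lr`); stop after `stop_patience` stale epochs.
    Patience values here are assumptions, not the notebook's real settings.
    """

    def __init__(self, lr=1e-3, factor=0.1, patience=5, min_lr=1e-5,
                 stop_patience=50):
        self.lr, self.factor, self.patience = lr, factor, patience
        self.min_lr, self.stop_patience = min_lr, stop_patience
        self.best = float("inf")
        self.wait = 0  # epochs since the last improvement

    def update(self, val_loss):
        """Feed one epoch's val_loss; returns (current_lr, should_stop)."""
        if val_loss < self.best:
            self.best, self.wait = val_loss, 0
        else:
            self.wait += 1
            if self.wait % self.patience == 0:  # plateau: decay the LR
                self.lr = max(self.lr * self.factor, self.min_lr)
        return self.lr, self.wait >= self.stop_patience
```

Feeding a single improvement followed by a plateau reproduces the staircase pattern of `lr: 0.0010` → `1.0000e-04` → `1.0000e-05` seen in the epochs above.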
SMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 63.854129927017006 
RMSE:	 7.990877919666713 
MAPE:	 6.455960052106778

EMA
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 60.00625951694821 
RMSE:	 7.7463707319588195 
MAPE:	 6.477662803945572

WMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 56.69687583727807 
RMSE:	 7.529732786578689 
MAPE:	 6.079114892920341
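The metrics printed above can be reproduced with NumPy. The two "Accuracy" figures are interpreted here as directional accuracy (sign of the predicted move vs. the actual move, measured against the close and against the previous prediction); that interpretation, and the function name, are assumptions about what the notebook's hidden evaluation cell computes:

```python
import numpy as np

def evaluate(pred, close):
    """Hypothetical re-implementation of the metric printout above.

    Directional accuracy compares the sign of each day's move; MSE/RMSE/MAPE
    compare predicted and actual levels directly.
    """
    pred = np.asarray(pred, dtype=float)
    close = np.asarray(close, dtype=float)
    actual_dir = np.sign(np.diff(close))            # actual up/down moves
    dir_vs_close = np.sign(pred[1:] - close[:-1])   # "Prediction vs Close"
    dir_vs_pred = np.sign(np.diff(pred))            # "Prediction vs Prediction"
    acc_close = np.mean(dir_vs_close == actual_dir) * 100
    acc_pred = np.mean(dir_vs_pred == actual_dir) * 100
    mse = np.mean((close - pred) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((close - pred) / close)) * 100
    return acc_close, acc_pred, mse, rmse, mape
```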

DEMA
DEMA([input_arrays], [timeperiod=30])

Double Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
89
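The DEMA described by the TA-Lib docstring above can be written in a few lines of NumPy: a double EMA subtracts a second-order smoothing to reduce lag. This sketch seeds the EMA with the first price, so its warm-up values differ slightly from TA-Lib's (which seeds with an SMA over the first `timeperiod` values):

```python
import numpy as np

def ema(x, period):
    """Recursive exponential moving average with alpha = 2 / (period + 1)."""
    alpha = 2.0 / (period + 1.0)
    out = np.empty(len(x), dtype=float)
    out[0] = x[0]  # seed with the first value (TA-Lib seeds with an SMA)
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1.0 - alpha) * out[i - 1]
    return out

def dema(price, period=30):
    """DEMA = 2*EMA(price) - EMA(EMA(price)): less lag than a plain EMA."""
    e1 = ema(np.asarray(price, dtype=float), period)
    return 2.0 * e1 - ema(e1, period)
```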

Working on DEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17007.773, Time=3.60 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14576.593, Time=5.12 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16293.727, Time=8.77 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14574.593, Time=7.71 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16647.994, Time=10.39 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15621.952, Time=11.80 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-16876.201, Time=12.07 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-17032.019, Time=7.23 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-17006.612, Time=3.66 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-17089.440, Time=7.75 sec
 ARIMA(3,3,2)(0,0,0)[0]             : AIC=inf, Time=17.27 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-17005.977, Time=3.87 sec
 ARIMA(3,3,1)(0,0,0)[0] intercept   : AIC=-17000.665, Time=4.53 sec

Best model:  ARIMA(3,3,1)(0,0,0)[0]          
Total fit time: 103.787 seconds
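The stepwise trace above is `auto_arima` minimising AIC over candidate orders. pmdarima's actual algorithm (Hyndman–Khandakar) also varies the differencing and MA terms and expands neighbours of the current best model; the core idea — fit each candidate, keep the smallest AIC — can be sketched with a plain least-squares AR fit in NumPy:

```python
import numpy as np

def ar_aic(x, p):
    """Least-squares AR(p) fit; returns AIC = n*log(RSS/n) + 2*(p + 1)."""
    # Lag matrix: column k holds x shifted by (k+1) steps.
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coef = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ coef
    n = len(y)
    return n * np.log(resid @ resid / n) + 2 * (p + 1)

# Simulate an AR(2) process, then pick the order with the smallest AIC --
# the same criterion the stepwise search above minimises.
rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(2, 500):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()
best_p = min(range(1, 6), key=lambda p: ar_aic(x, p))
```

The AIC penalty (`2 * (p + 1)`) is what keeps the search from always preferring the largest model, mirroring why ARIMA(3,3,2) and the intercept variants lose to ARIMA(3,3,1) above.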
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 1)   Log Likelihood                8569.720
Date:                Sun, 12 Dec 2021   AIC                         -17089.440
Time:                        21:34:09   BIC                         -16972.169
Sample:                             0   HQIC                        -17044.403
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -2.799e-10   1.36e-20  -2.06e+10      0.000    -2.8e-10    -2.8e-10
x2         -2.816e-10   1.37e-20  -2.06e+10      0.000   -2.82e-10   -2.82e-10
x3         -2.804e-10   1.36e-20  -2.06e+10      0.000    -2.8e-10    -2.8e-10
x4             1.0000   1.36e-20   7.33e+19      0.000       1.000       1.000
x5         -2.598e-10   1.31e-20  -1.98e+10      0.000    -2.6e-10    -2.6e-10
x6         -1.388e-09   2.97e-20  -4.67e+10      0.000   -1.39e-09   -1.39e-09
x7         -2.788e-10   1.36e-20  -2.05e+10      0.000   -2.79e-10   -2.79e-10
x8         -2.761e-10   1.35e-20  -2.04e+10      0.000   -2.76e-10   -2.76e-10
x9          -2.22e-12   3.36e-22  -6.61e+09      0.000   -2.22e-12   -2.22e-12
x10        -1.345e-10   9.36e-21  -1.44e+10      0.000   -1.34e-10   -1.34e-10
x11        -2.898e-10   1.38e-20  -2.09e+10      0.000    -2.9e-10    -2.9e-10
x12        -2.602e-10   1.31e-20  -1.98e+10      0.000    -2.6e-10    -2.6e-10
x13        -2.807e-10   1.36e-20  -2.06e+10      0.000   -2.81e-10   -2.81e-10
x14         -1.87e-09   3.52e-20  -5.31e+10      0.000   -1.87e-09   -1.87e-09
x15        -2.767e-10   1.37e-20  -2.03e+10      0.000   -2.77e-10   -2.77e-10
x16        -8.184e-11   7.33e-21  -1.12e+10      0.000   -8.18e-11   -8.18e-11
x17        -2.407e-10   1.27e-20   -1.9e+10      0.000   -2.41e-10   -2.41e-10
x18        -6.412e-10   2.06e-20  -3.11e+10      0.000   -6.41e-10   -6.41e-10
x19        -2.915e-10   1.39e-20   -2.1e+10      0.000   -2.92e-10   -2.92e-10
x20        -4.337e-10   1.69e-20  -2.56e+10      0.000   -4.34e-10   -4.34e-10
ar.L1         -0.4924   1.46e-22  -3.38e+21      0.000      -0.492      -0.492
ar.L2         -0.1923   8.47e-23  -2.27e+21      0.000      -0.192      -0.192
ar.L3         -0.0461   4.02e-23  -1.15e+21      0.000      -0.046      -0.046
ma.L1         -0.7078   3.31e-22  -2.14e+21      0.000      -0.708      -0.708
sigma2       8.99e-11   6.95e-11      1.293      0.196   -4.64e-11    2.26e-10
===================================================================================
Ljung-Box (L1) (Q):                  55.12   Jarque-Bera (JB):           4171061.36
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             5.27
Prob(H) (two-sided):                  0.00   Kurtosis:                       355.48
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.88e+40. Standard errors may be unstable.
ARIMA order: (3, 3, 1) 

/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
  super(Adam, self).__init__(name, **kwargs)
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.05846, saving model to LSTM6.h5
10/10 - 5s - loss: 0.1879 - accuracy: 0.0000e+00 - val_loss: 0.0585 - val_accuracy: 0.0037 - lr: 0.0010 - 5s/epoch - 523ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.05846 to 0.01286, saving model to LSTM6.h5
10/10 - 0s - loss: 0.0721 - accuracy: 0.0000e+00 - val_loss: 0.0129 - val_accuracy: 0.0037 - lr: 0.0010 - 124ms/epoch - 12ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.01286
10/10 - 0s - loss: 0.0232 - accuracy: 0.0000e+00 - val_loss: 0.0828 - val_accuracy: 0.0000e+00 - lr: 0.0010 - 98ms/epoch - 10ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.01286
10/10 - 0s - loss: 0.0083 - accuracy: 0.0000e+00 - val_loss: 0.0383 - val_accuracy: 0.0037 - lr: 0.0010 - 95ms/epoch - 9ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.01286
10/10 - 0s - loss: 0.0043 - accuracy: 0.0000e+00 - val_loss: 0.0515 - val_accuracy: 0.0037 - lr: 0.0010 - 102ms/epoch - 10ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.01286
10/10 - 0s - loss: 0.0030 - accuracy: 0.0000e+00 - val_loss: 0.0220 - val_accuracy: 0.0037 - lr: 0.0010 - 102ms/epoch - 10ms/step
Epoch 7/500

Epoch 00007: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00007: val_loss did not improve from 0.01286
10/10 - 0s - loss: 0.0017 - accuracy: 0.0000e+00 - val_loss: 0.0357 - val_accuracy: 0.0037 - lr: 0.0010 - 120ms/epoch - 12ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.01286
10/10 - 0s - loss: 0.0022 - accuracy: 0.0000e+00 - val_loss: 0.0331 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 127ms/epoch - 13ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.01286
10/10 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0298 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 110ms/epoch - 11ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.8005e-04 - accuracy: 0.0000e+00 - val_loss: 0.0283 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 110ms/epoch - 11ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.8770e-04 - accuracy: 0.0000e+00 - val_loss: 0.0287 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 103ms/epoch - 10ms/step
Epoch 12/500

Epoch 00012: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00012: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.7660e-04 - accuracy: 0.0000e+00 - val_loss: 0.0298 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 99ms/epoch - 10ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6659e-04 - accuracy: 0.0000e+00 - val_loss: 0.0299 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 98ms/epoch - 10ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6661e-04 - accuracy: 0.0000e+00 - val_loss: 0.0300 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 105ms/epoch - 11ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6671e-04 - accuracy: 0.0000e+00 - val_loss: 0.0300 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 118ms/epoch - 12ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6686e-04 - accuracy: 0.0000e+00 - val_loss: 0.0301 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 110ms/epoch - 11ms/step
Epoch 17/500

Epoch 00017: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00017: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6702e-04 - accuracy: 0.0000e+00 - val_loss: 0.0301 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 103ms/epoch - 10ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6720e-04 - accuracy: 0.0000e+00 - val_loss: 0.0302 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 115ms/epoch - 12ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6737e-04 - accuracy: 0.0000e+00 - val_loss: 0.0302 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 108ms/epoch - 11ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6753e-04 - accuracy: 0.0000e+00 - val_loss: 0.0303 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 140ms/epoch - 14ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6768e-04 - accuracy: 0.0000e+00 - val_loss: 0.0303 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 103ms/epoch - 10ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6782e-04 - accuracy: 0.0000e+00 - val_loss: 0.0304 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 103ms/epoch - 10ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6794e-04 - accuracy: 0.0000e+00 - val_loss: 0.0304 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 104ms/epoch - 10ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6805e-04 - accuracy: 0.0000e+00 - val_loss: 0.0305 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 119ms/epoch - 12ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6813e-04 - accuracy: 0.0000e+00 - val_loss: 0.0305 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 115ms/epoch - 11ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6821e-04 - accuracy: 0.0000e+00 - val_loss: 0.0306 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 98ms/epoch - 10ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6826e-04 - accuracy: 0.0000e+00 - val_loss: 0.0306 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 103ms/epoch - 10ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6831e-04 - accuracy: 0.0000e+00 - val_loss: 0.0307 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 110ms/epoch - 11ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6834e-04 - accuracy: 0.0000e+00 - val_loss: 0.0307 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 100ms/epoch - 10ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6835e-04 - accuracy: 0.0000e+00 - val_loss: 0.0307 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 107ms/epoch - 11ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6836e-04 - accuracy: 0.0000e+00 - val_loss: 0.0308 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 118ms/epoch - 12ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6835e-04 - accuracy: 0.0000e+00 - val_loss: 0.0308 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 126ms/epoch - 13ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6834e-04 - accuracy: 0.0000e+00 - val_loss: 0.0308 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 120ms/epoch - 12ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6831e-04 - accuracy: 0.0000e+00 - val_loss: 0.0309 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 96ms/epoch - 10ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6828e-04 - accuracy: 0.0000e+00 - val_loss: 0.0309 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 106ms/epoch - 11ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6824e-04 - accuracy: 0.0000e+00 - val_loss: 0.0310 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 102ms/epoch - 10ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6819e-04 - accuracy: 0.0000e+00 - val_loss: 0.0310 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 102ms/epoch - 10ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6813e-04 - accuracy: 0.0000e+00 - val_loss: 0.0310 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 119ms/epoch - 12ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6807e-04 - accuracy: 0.0000e+00 - val_loss: 0.0311 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 97ms/epoch - 10ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6800e-04 - accuracy: 0.0000e+00 - val_loss: 0.0311 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 100ms/epoch - 10ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6791e-04 - accuracy: 0.0000e+00 - val_loss: 0.0312 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 112ms/epoch - 11ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6783e-04 - accuracy: 0.0000e+00 - val_loss: 0.0312 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 123ms/epoch - 12ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6773e-04 - accuracy: 0.0000e+00 - val_loss: 0.0312 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 116ms/epoch - 12ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6763e-04 - accuracy: 0.0000e+00 - val_loss: 0.0313 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 112ms/epoch - 11ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6751e-04 - accuracy: 0.0000e+00 - val_loss: 0.0313 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 98ms/epoch - 10ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6740e-04 - accuracy: 0.0000e+00 - val_loss: 0.0313 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 104ms/epoch - 10ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6727e-04 - accuracy: 0.0000e+00 - val_loss: 0.0314 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 102ms/epoch - 10ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6713e-04 - accuracy: 0.0000e+00 - val_loss: 0.0314 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 107ms/epoch - 11ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6699e-04 - accuracy: 0.0000e+00 - val_loss: 0.0314 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 106ms/epoch - 11ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6684e-04 - accuracy: 0.0000e+00 - val_loss: 0.0315 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 165ms/epoch - 16ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6668e-04 - accuracy: 0.0000e+00 - val_loss: 0.0315 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 102ms/epoch - 10ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.01286
10/10 - 0s - loss: 9.6651e-04 - accuracy: 0.0000e+00 - val_loss: 0.0315 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 108ms/epoch - 11ms/step
Epoch 00052: early stopping
SMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 63.854129927017006 
RMSE:	 7.990877919666713 
MAPE:	 6.455960052106778

EMA
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 60.00625951694821 
RMSE:	 7.7463707319588195 
MAPE:	 6.477662803945572

WMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 56.69687583727807 
RMSE:	 7.529732786578689 
MAPE:	 6.079114892920341

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 115.62227637424353 
RMSE:	 10.752779937032262 
MAPE:	 9.572797111202712

KAMA
KAMA([input_arrays], [timeperiod=30])

Kaufman Adaptive Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
18
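Kaufman's adaptive MA, per the docstring above, scales its smoothing constant by an efficiency ratio (net change over total path length), so it tracks fast in trends and flattens in noise. A NumPy sketch of the standard formula (warm-up seeding may differ from TA-Lib's):

```python
import numpy as np

def kama(price, period=30, fast=2, slow=30):
    """Kaufman Adaptive Moving Average.

    er = |net change over `period`| / sum of |bar-to-bar changes|;
    the smoothing constant interpolates between fast and slow EMA alphas,
    squared, so flat markets get heavy smoothing and trends get little.
    """
    price = np.asarray(price, dtype=float)
    fast_sc = 2.0 / (fast + 1.0)
    slow_sc = 2.0 / (slow + 1.0)
    out = np.full(len(price), np.nan)
    out[period - 1] = price[period - 1]  # seed at the first full window
    for t in range(period, len(price)):
        change = abs(price[t] - price[t - period])
        volatility = np.sum(np.abs(np.diff(price[t - period : t + 1])))
        er = change / volatility if volatility > 0 else 0.0
        sc = (er * (fast_sc - slow_sc) + slow_sc) ** 2
        out[t] = out[t - 1] + sc * (price[t] - out[t - 1])
    return out
```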

Working on KAMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17007.733, Time=3.37 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14576.593, Time=5.10 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16469.294, Time=9.39 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14574.593, Time=8.08 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16346.513, Time=10.24 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-16569.862, Time=11.75 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-16356.870, Time=17.92 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-17033.457, Time=6.57 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-17006.582, Time=3.69 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-17089.434, Time=7.61 sec
 ARIMA(3,3,2)(0,0,0)[0]             : AIC=-15789.397, Time=14.49 sec
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-15386.395, Time=26.04 sec
 ARIMA(3,3,1)(0,0,0)[0] intercept   : AIC=47.433, Time=7.57 sec

Best model:  ARIMA(3,3,1)(0,0,0)[0]          
Total fit time: 131.852 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 1)   Log Likelihood                8569.717
Date:                Sun, 12 Dec 2021   AIC                         -17089.434
Time:                        21:48:03   BIC                         -16972.163
Sample:                             0   HQIC                        -17044.397
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -2.222e-10   9.26e-21   -2.4e+10      0.000   -2.22e-10   -2.22e-10
x2         -2.175e-10   9.18e-21  -2.37e+10      0.000   -2.18e-10   -2.18e-10
x3         -2.088e-10   8.98e-21  -2.33e+10      0.000   -2.09e-10   -2.09e-10
x4             1.0000   9.08e-21    1.1e+20      0.000       1.000       1.000
x5         -1.927e-10   8.64e-21  -2.23e+10      0.000   -1.93e-10   -1.93e-10
x6          -1.33e-09   2.17e-20  -6.14e+10      0.000   -1.33e-09   -1.33e-09
x7         -2.053e-10   8.93e-21   -2.3e+10      0.000   -2.05e-10   -2.05e-10
x8         -1.999e-10   8.84e-21  -2.26e+10      0.000      -2e-10      -2e-10
x9           -3.6e-11   1.09e-21  -3.29e+10      0.000    -3.6e-11    -3.6e-11
x10        -9.188e-11   3.87e-21  -2.37e+10      0.000   -9.19e-11   -9.19e-11
x11        -2.014e-10   8.86e-21  -2.27e+10      0.000   -2.01e-10   -2.01e-10
x12        -1.994e-10   8.77e-21  -2.27e+10      0.000   -1.99e-10   -1.99e-10
x13        -2.115e-10   9.05e-21  -2.34e+10      0.000   -2.12e-10   -2.12e-10
x14        -1.723e-09    2.6e-20  -6.63e+10      0.000   -1.72e-09   -1.72e-09
x15        -2.116e-10    9.1e-21  -2.33e+10      0.000   -2.12e-10   -2.12e-10
x16        -3.169e-10   1.11e-20  -2.85e+10      0.000   -3.17e-10   -3.17e-10
x17        -1.804e-10    8.4e-21  -2.15e+10      0.000    -1.8e-10    -1.8e-10
x18        -1.463e-10   7.54e-21  -1.94e+10      0.000   -1.46e-10   -1.46e-10
x19        -2.598e-10   1.01e-20  -2.58e+10      0.000    -2.6e-10    -2.6e-10
x20        -3.922e-10   1.24e-20  -3.18e+10      0.000   -3.92e-10   -3.92e-10
ar.L1         -0.4926   1.44e-22  -3.42e+21      0.000      -0.493      -0.493
ar.L2         -0.1937    8.6e-23  -2.25e+21      0.000      -0.194      -0.194
ar.L3         -0.0441   3.86e-23  -1.14e+21      0.000      -0.044      -0.044
ma.L1         -0.7085    3.3e-22  -2.15e+21      0.000      -0.709      -0.709
sigma2       8.99e-11   6.95e-11      1.293      0.196   -4.64e-11    2.26e-10
===================================================================================
Ljung-Box (L1) (Q):                  57.24   Jarque-Bera (JB):           3956070.89
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.01   Skew:                             5.16
Prob(H) (two-sided):                  0.00   Kurtosis:                       346.28
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 5.5e+39. Standard errors may be unstable.
ARIMA order: (3, 3, 1) 
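Both SARIMAX summaries report Jarque–Bera statistics in the millions with kurtosis above 300 — residuals far from mesokurtic, which is exactly the volatility imbalance flagged in the introduction. The statistic itself is simple to compute; this is a sketch from the textbook formula, not statsmodels' implementation (which may apply small-sample corrections):

```python
import numpy as np

def jarque_bera(resid):
    """JB = n/6 * (S^2 + (K - 3)^2 / 4); large values reject normality."""
    resid = np.asarray(resid, dtype=float)
    n = resid.size
    m = resid - resid.mean()
    s2 = np.mean(m ** 2)
    skew = np.mean(m ** 3) / s2 ** 1.5       # S: third standardised moment
    kurt = np.mean(m ** 4) / s2 ** 2         # K: fourth standardised moment
    return n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)
```

With a kurtosis of ~346 on 808 residuals, the (K−3)² term alone drives JB into the millions, matching the summary above.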

/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
  super(Adam, self).__init__(name, **kwargs)
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.16046, saving model to LSTM6.h5
45/45 - 5s - loss: 0.1373 - accuracy: 0.0000e+00 - val_loss: 0.1605 - val_accuracy: 0.0000e+00 - lr: 0.0010 - 5s/epoch - 110ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.16046 to 0.02408, saving model to LSTM6.h5
45/45 - 0s - loss: 0.0606 - accuracy: 0.0000e+00 - val_loss: 0.0241 - val_accuracy: 0.0037 - lr: 0.0010 - 369ms/epoch - 8ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.02408
45/45 - 0s - loss: 0.0158 - accuracy: 0.0000e+00 - val_loss: 0.0451 - val_accuracy: 0.0037 - lr: 0.0010 - 339ms/epoch - 8ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.02408 to 0.00695, saving model to LSTM6.h5
45/45 - 0s - loss: 0.0101 - accuracy: 0.0000e+00 - val_loss: 0.0069 - val_accuracy: 0.0037 - lr: 0.0010 - 385ms/epoch - 9ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.00695
45/45 - 0s - loss: 0.0028 - accuracy: 0.0000e+00 - val_loss: 0.0210 - val_accuracy: 0.0037 - lr: 0.0010 - 366ms/epoch - 8ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.00695
45/45 - 0s - loss: 0.0021 - accuracy: 0.0000e+00 - val_loss: 0.0074 - val_accuracy: 0.0037 - lr: 0.0010 - 338ms/epoch - 8ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.00695
45/45 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0152 - val_accuracy: 0.0037 - lr: 0.0010 - 347ms/epoch - 8ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.00695
45/45 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0141 - val_accuracy: 0.0037 - lr: 0.0010 - 348ms/epoch - 8ms/step
Epoch 9/500

Epoch 00009: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00009: val_loss did not improve from 0.00695
45/45 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0190 - val_accuracy: 0.0037 - lr: 0.0010 - 336ms/epoch - 7ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.00695
45/45 - 0s - loss: 0.0015 - accuracy: 0.0000e+00 - val_loss: 0.0128 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 352ms/epoch - 8ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.00695
45/45 - 0s - loss: 9.2692e-04 - accuracy: 0.0000e+00 - val_loss: 0.0142 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 365ms/epoch - 8ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.00695
45/45 - 0s - loss: 8.8928e-04 - accuracy: 0.0000e+00 - val_loss: 0.0145 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 352ms/epoch - 8ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.00695
45/45 - 0s - loss: 8.6865e-04 - accuracy: 0.0000e+00 - val_loss: 0.0150 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 335ms/epoch - 7ms/step
Epoch 14/500

Epoch 00014: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00014: val_loss did not improve from 0.00695
45/45 - 0s - loss: 8.5812e-04 - accuracy: 0.0000e+00 - val_loss: 0.0154 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 331ms/epoch - 7ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.00695
45/45 - 0s - loss: 8.5115e-04 - accuracy: 0.0000e+00 - val_loss: 0.0155 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 366ms/epoch - 8ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.00695
45/45 - 0s - loss: 8.5021e-04 - accuracy: 0.0000e+00 - val_loss: 0.0156 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 335ms/epoch - 7ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.00695
45/45 - 0s - loss: 8.4954e-04 - accuracy: 0.0000e+00 - val_loss: 0.0157 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 332ms/epoch - 7ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.00695
45/45 - 0s - loss: 8.4897e-04 - accuracy: 0.0000e+00 - val_loss: 0.0157 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 341ms/epoch - 8ms/step
Epoch 19/500

Epoch 00019: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00019: val_loss did not improve from 0.00695
45/45 - 0s - loss: 8.4843e-04 - accuracy: 0.0000e+00 - val_loss: 0.0158 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 339ms/epoch - 8ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.00695
45/45 - 0s - loss: 8.4790e-04 - accuracy: 0.0000e+00 - val_loss: 0.0158 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 330ms/epoch - 7ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.00695
45/45 - 0s - loss: 8.4738e-04 - accuracy: 0.0000e+00 - val_loss: 0.0159 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 335ms/epoch - 7ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.00695
45/45 - 0s - loss: 8.4686e-04 - accuracy: 0.0000e+00 - val_loss: 0.0159 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 345ms/epoch - 8ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.00695
45/45 - 0s - loss: 8.4633e-04 - accuracy: 0.0000e+00 - val_loss: 0.0160 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 341ms/epoch - 8ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.00695
45/45 - 0s - loss: 8.4581e-04 - accuracy: 0.0000e+00 - val_loss: 0.0161 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 337ms/epoch - 7ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.00695
45/45 - 0s - loss: 8.4530e-04 - accuracy: 0.0000e+00 - val_loss: 0.0161 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 340ms/epoch - 8ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.00695
45/45 - 0s - loss: 8.4478e-04 - accuracy: 0.0000e+00 - val_loss: 0.0162 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 347ms/epoch - 8ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.00695
45/45 - 0s - loss: 8.4427e-04 - accuracy: 0.0000e+00 - val_loss: 0.0162 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 345ms/epoch - 8ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.00695
45/45 - 0s - loss: 8.4375e-04 - accuracy: 0.0000e+00 - val_loss: 0.0163 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 339ms/epoch - 8ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.00695
45/45 - 0s - loss: 8.4324e-04 - accuracy: 0.0000e+00 - val_loss: 0.0163 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 331ms/epoch - 7ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.00695
45/45 - 0s - loss: 8.4274e-04 - accuracy: 0.0000e+00 - val_loss: 0.0164 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 348ms/epoch - 8ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.00695
45/45 - 0s - loss: 8.4223e-04 - accuracy: 0.0000e+00 - val_loss: 0.0164 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 342ms/epoch - 8ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.00695
45/45 - 0s - loss: 8.4172e-04 - accuracy: 0.0000e+00 - val_loss: 0.0165 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 339ms/epoch - 8ms/step
Epoch 33/500

[... epochs 33–54 abridged: val_loss did not improve from 0.00695 on any epoch; training loss crept from 8.41e-04 down to 8.30e-04 while val_loss drifted from 0.0166 to 0.0177 at lr 1.0000e-05 ...]
Epoch 00054: early stopping
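The callback pattern visible in this log — ReduceLROnPlateau stepping the learning rate down to 1e-04 and then 1e-05 once val_loss stalls, followed by early stopping — can be mimicked in a few lines of pure Python. The `factor`, patience, and `min_lr` values below are illustrative guesses, not the notebook's actual callback settings.

```python
def simulate_schedule(val_losses, lr=1e-3, factor=0.1, lr_patience=5,
                      stop_patience=10, min_lr=1e-5):
    """Mock of the ReduceLROnPlateau + EarlyStopping pattern in the log:
    cut lr by `factor` every `lr_patience` epochs without a new best
    val_loss, and stop entirely after `stop_patience` stale epochs."""
    best = float("inf")
    since_best = 0
    history = []  # (epoch, lr) pairs actually run
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best, since_best = loss, 0
        else:
            since_best += 1
            if since_best % lr_patience == 0:
                lr = max(lr * factor, min_lr)
        history.append((epoch, lr))
        if since_best >= stop_patience:
            break  # early stopping
    return best, history
```

Run against a loss curve that improves once and then plateaus, it reproduces the shape of the log: one best value, two learning-rate cuts, then a stop.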
SMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 63.854129927017006 
RMSE:	 7.990877919666713 
MAPE:	 6.455960052106778

EMA
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 60.00625951694821 
RMSE:	 7.7463707319588195 
MAPE:	 6.477662803945572

WMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 56.69687583727807 
RMSE:	 7.529732786578689 
MAPE:	 6.079114892920341

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 115.62227637424353 
RMSE:	 10.752779937032262 
MAPE:	 9.572797111202712

KAMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	44.78% Accuracy
MSE:	 37.71755854324582 
RMSE:	 6.141462247970415 
MAPE:	 4.988005540782208

MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])

MidPoint over period (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 14
Outputs:
    real
14
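The help text above defines MIDPOINT as the midpoint of price over the window, i.e. (highest + lowest) / 2 across `timeperiod` bars. A minimal NumPy stand-in makes that concrete — illustrative only; the notebook itself uses TA-Lib's compiled `MIDPOINT`:

```python
import numpy as np

def midpoint(price, timeperiod=14):
    """Rolling (max + min) / 2 over `timeperiod` bars; NaN until the
    window fills.  Hand-rolled stand-in for talib.MIDPOINT."""
    price = np.asarray(price, dtype=float)
    out = np.full(price.shape, np.nan)
    for i in range(timeperiod - 1, len(price)):
        window = price[i - timeperiod + 1 : i + 1]
        out[i] = (window.max() + window.min()) / 2.0
    return out
```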

Working on MIDPOINT predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17005.792, Time=3.65 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14576.592, Time=5.19 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16618.742, Time=8.51 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14574.592, Time=7.77 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-17004.301, Time=3.93 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15715.779, Time=22.60 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=inf, Time=3.79 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-17007.442, Time=3.87 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-17188.392, Time=16.65 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-17002.377, Time=4.23 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=-16356.269, Time=14.96 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 95.191 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood                8618.196
Date:                Sun, 12 Dec 2021   AIC                         -17188.392
Time:                        22:01:05   BIC                         -17075.812
Sample:                             0   HQIC                        -17145.157
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -3.582e-10   2.18e-20  -1.64e+10      0.000   -3.58e-10   -3.58e-10
x2         -3.575e-10   2.25e-20  -1.59e+10      0.000   -3.57e-10   -3.57e-10
x3         -3.653e-10   2.09e-20  -1.75e+10      0.000   -3.65e-10   -3.65e-10
x4             1.0000   2.18e-20   4.59e+19      0.000       1.000       1.000
x5         -3.252e-10   2.07e-20  -1.57e+10      0.000   -3.25e-10   -3.25e-10
x6         -7.157e-09   1.78e-19  -4.03e+10      0.000   -7.16e-09   -7.16e-09
x7          -3.29e-10   2.09e-20  -1.58e+10      0.000   -3.29e-10   -3.29e-10
x8          -3.28e-10   2.12e-20  -1.54e+10      0.000   -3.28e-10   -3.28e-10
x9         -1.775e-10   1.29e-21  -1.37e+11      0.000   -1.77e-10   -1.77e-10
x10         -2.94e-10    5.5e-21  -5.34e+10      0.000   -2.94e-10   -2.94e-10
x11        -3.247e-10   2.11e-20  -1.54e+10      0.000   -3.25e-10   -3.25e-10
x12        -3.357e-10   2.11e-20  -1.59e+10      0.000   -3.36e-10   -3.36e-10
x13         -3.46e-10   2.14e-20  -1.62e+10      0.000   -3.46e-10   -3.46e-10
x14        -2.825e-09   6.25e-20  -4.52e+10      0.000   -2.82e-09   -2.82e-09
x15        -3.957e-10   2.33e-20  -1.69e+10      0.000   -3.96e-10   -3.96e-10
x16        -2.548e-10   1.87e-20  -1.36e+10      0.000   -2.55e-10   -2.55e-10
x17        -2.495e-10   1.85e-20  -1.35e+10      0.000   -2.49e-10   -2.49e-10
x18        -1.073e-09   3.84e-20  -2.79e+10      0.000   -1.07e-09   -1.07e-09
x19        -4.343e-10   2.45e-20  -1.78e+10      0.000   -4.34e-10   -4.34e-10
x20        -1.047e-09   3.78e-20  -2.77e+10      0.000   -1.05e-09   -1.05e-09
ar.L1         -1.2157   8.99e-23  -1.35e+22      0.000      -1.216      -1.216
ar.L2         -0.9187   9.81e-23  -9.36e+21      0.000      -0.919      -0.919
ar.L3         -0.4095   9.98e-23   -4.1e+21      0.000      -0.409      -0.409
sigma2      7.969e-11   6.92e-11      1.151      0.250    -5.6e-11    2.15e-10
===================================================================================
Ljung-Box (L1) (Q):                   2.47   Jarque-Bera (JB):             15463.35
Prob(Q):                              0.12   Prob(JB):                         0.00
Heteroskedasticity (H):               0.35   Skew:                             0.62
Prob(H) (two-sided):                  0.00   Kurtosis:                        24.44
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 2.74e+40. Standard errors may be unstable.
ARIMA order: (3, 3, 0) 
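The stepwise search above ranks candidates by AIC = 2k − 2 ln L (lower is better). With the reported log likelihood of 8618.196, the printed AIC of −17188.392 is consistent with 24 estimated parameters (the 20 exogenous terms, 3 AR terms, and sigma2). A sketch of the selection step — the other candidates' log likelihoods are back-derived from their printed AICs for illustration, not taken from the fit objects:

```python
def aic(log_likelihood, n_params):
    # Akaike information criterion: 2k - 2*ln(L); lower is better.
    return 2 * n_params - 2 * log_likelihood

# (p, d, q) -> (log-likelihood, parameter count).  The (3,3,0) row uses
# the SARIMAX summary above; the others are approximate reconstructions.
candidates = {
    (1, 3, 1): (8525.9, 23),
    (2, 3, 0): (8526.7, 23),
    (3, 3, 0): (8618.196, 24),
}
scores = {order: aic(ll, k) for order, (ll, k) in candidates.items()}
best_order = min(scores, key=scores.get)
```

This is the criterion `pmdarima.auto_arima` minimizes when it reports "Performing stepwise search to minimize aic".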

/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
  super(Adam, self).__init__(name, **kwargs)
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.10075, saving model to LSTM6.h5
58/58 - 5s - loss: 0.1317 - accuracy: 0.0000e+00 - val_loss: 0.1007 - val_accuracy: 0.0000e+00 - lr: 0.0010 - 5s/epoch - 89ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.10075 to 0.01617, saving model to LSTM6.h5
58/58 - 0s - loss: 0.0317 - accuracy: 0.0000e+00 - val_loss: 0.0162 - val_accuracy: 0.0037 - lr: 0.0010 - 466ms/epoch - 8ms/step
[... epochs 3–58 abridged: val_loss improved to 0.00905 (epoch 4), 0.00466 (epoch 7), and a best of 0.00342 (epoch 8); ReduceLROnPlateau cut the learning rate to 1.0000e-04 (epoch 13) and 1.0000e-05 (epoch 18); no further improvement, with training loss settling near 7.8e-04 while val_loss drifted up to 0.0066 ...]
Epoch 00058: early stopping
SMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 63.854129927017006 
RMSE:	 7.990877919666713 
MAPE:	 6.455960052106778

EMA
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 60.00625951694821 
RMSE:	 7.7463707319588195 
MAPE:	 6.477662803945572

WMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 56.69687583727807 
RMSE:	 7.529732786578689 
MAPE:	 6.079114892920341

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 115.62227637424353 
RMSE:	 10.752779937032262 
MAPE:	 9.572797111202712

KAMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	44.78% Accuracy
MSE:	 37.71755854324582 
RMSE:	 6.141462247970415 
MAPE:	 4.988005540782208

MIDPOINT
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	45.15% Accuracy
MSE:	 58.98858807771727 
RMSE:	 7.680402859076942 
MAPE:	 6.28601448834674
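The per-indicator blocks above report the same four numbers each time. The error measures are standard; the directional "Accuracy" lines are sketched here under an assumed definition (sign of the predicted move vs. sign of the actual move), which is a guess at what the notebook computes rather than its actual code:

```python
import numpy as np

def regression_report(y_true, y_pred):
    """MSE, RMSE, and MAPE (in percent), as printed in the summaries."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mse = np.mean((y_true - y_pred) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100
    return mse, rmse, mape

def directional_accuracy(y_true, y_pred):
    """Share of steps where the predicted move matches the actual move
    ('Prediction vs Close' style) -- an assumed definition."""
    true_dir = np.sign(np.diff(np.asarray(y_true, dtype=float)))
    pred_dir = np.sign(np.diff(np.asarray(y_pred, dtype=float)))
    return np.mean(true_dir == pred_dir) * 100
```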

T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])

Triple Exponential Moving Average (T3) (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 5
    vfactor: 0.7
Outputs:
    real
19
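T3 is Tillson's triple-smoothed moving average: each pass applies a "generalized DEMA", GD(x) = (1 + v)·EMA(x) − v·EMA(EMA(x)), and T3 = GD(GD(GD(price))) with v = `vfactor`. A rough NumPy sketch of that recursion follows — seeding and lookback handling differ from TA-Lib's compiled `T3`, so treat it as illustration only:

```python
import numpy as np

def ema(x, period):
    """Simple recursive EMA seeded at the first sample."""
    alpha = 2.0 / (period + 1)
    out = np.empty(len(x), dtype=float)
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1 - alpha) * out[i - 1]
    return out

def t3(price, timeperiod=5, vfactor=0.7):
    """Tillson T3: three passes of GD(x) = (1+v)*EMA(x) - v*EMA(EMA(x)).
    Defaults match the help text above; not TA-Lib's exact output."""
    x = np.asarray(price, dtype=float)

    def gd(s):
        e = ema(s, timeperiod)
        return (1 + vfactor) * e - vfactor * ema(e, timeperiod)

    return gd(gd(gd(x)))
```

The volume-factor term is what reduces lag relative to a plain triple EMA: on a linear ramp each GD pass lags by only (period − 1)/2 · (1 − v·…) ≈ 0.6 bars at the defaults, versus 2 bars for a plain EMA of period 5.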

Working on T3 predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17007.439, Time=3.65 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-13714.163, Time=6.02 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-14620.288, Time=5.33 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-16512.116, Time=12.39 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-17085.548, Time=10.12 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-17009.877, Time=3.38 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-17089.740, Time=7.90 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-17006.211, Time=3.94 sec
 ARIMA(3,3,2)(0,0,0)[0]             : AIC=-17349.997, Time=19.12 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-17006.024, Time=4.10 sec
 ARIMA(3,3,3)(0,0,0)[0]             : AIC=-14720.521, Time=14.25 sec
 ARIMA(2,3,3)(0,0,0)[0]             : AIC=-16599.516, Time=14.92 sec
 ARIMA(3,3,2)(0,0,0)[0] intercept   : AIC=-13110.324, Time=18.84 sec

Best model:  ARIMA(3,3,2)(0,0,0)[0]          
Total fit time: 123.992 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 2)   Log Likelihood                8700.998
Date:                Sun, 12 Dec 2021   AIC                         -17349.997
Time:                        22:06:32   BIC                         -17228.035
Sample:                             0   HQIC                        -17303.158
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1          4.251e-09   2.48e-05      0.000      1.000   -4.85e-05    4.85e-05
x2          4.257e-09   2.48e-05      0.000      1.000   -4.86e-05    4.87e-05
x3          4.244e-09   2.34e-05      0.000      1.000   -4.58e-05    4.58e-05
x4             1.0000   2.37e-05   4.23e+04      0.000       1.000       1.000
x5          4.344e-09   2.35e-05      0.000      1.000    -4.6e-05     4.6e-05
x6          3.064e-09   6.26e-05   4.89e-05      1.000      -0.000       0.000
x7           4.26e-09   3.09e-05      0.000      1.000   -6.05e-05    6.05e-05
x8            -0.0001   4.28e-05     -2.782      0.005      -0.000   -3.51e-05
x9         -3.943e-09   4.01e-06     -0.001      0.999   -7.86e-06    7.85e-06
x10        -1.431e-05    9.6e-05     -0.149      0.881      -0.000       0.000
x11            0.0001   3.13e-05      3.693      0.000    5.42e-05       0.000
x12         1.616e-06   5.46e-05      0.030      0.976      -0.000       0.000
x13         4.247e-09   2.49e-05      0.000      1.000   -4.87e-05    4.87e-05
x14        -1.778e-08   5.56e-05     -0.000      1.000      -0.000       0.000
x15         4.488e-09      3e-05      0.000      1.000   -5.88e-05    5.88e-05
x16        -6.718e-09   4.66e-05     -0.000      1.000   -9.13e-05    9.13e-05
x17         3.935e-09    8.3e-06      0.000      1.000   -1.63e-05    1.63e-05
x18        -2.742e-08      0.000     -0.000      1.000      -0.000       0.000
x19         4.464e-09   4.48e-05   9.97e-05      1.000   -8.78e-05    8.78e-05
x20          4.06e-09      0.000   8.55e-06      1.000      -0.001       0.001
ar.L1         -1.2437   2.38e-08  -5.23e+07      0.000      -1.244      -1.244
ar.L2         -0.5344   9.34e-09  -5.72e+07      0.000      -0.534      -0.534
ar.L3         -0.1491   9.43e-10  -1.58e+08      0.000      -0.149      -0.149
ma.L1         -0.2521   9.13e-09  -2.76e+07      0.000      -0.252      -0.252
ma.L2         -0.7294   1.95e-08  -3.75e+07      0.000      -0.729      -0.729
sigma2      6.455e-11   6.89e-11      0.937      0.349   -7.05e-11       2e-10
===================================================================================
Ljung-Box (L1) (Q):                  30.63   Jarque-Bera (JB):           6336314.18
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                            13.86
Prob(H) (two-sided):                  0.00   Kurtosis:                       436.75
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.35e+27. Standard errors may be unstable.
ARIMA order: (3, 3, 2) 

/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
  super(Adam, self).__init__(name, **kwargs)
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.10064, saving model to LSTM6.h5
43/43 - 6s - loss: 0.1186 - accuracy: 0.0000e+00 - val_loss: 0.1006 - val_accuracy: 0.0000e+00 - lr: 0.0010 - 6s/epoch - 132ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.10064 to 0.01455, saving model to LSTM6.h5
43/43 - 0s - loss: 0.0729 - accuracy: 0.0000e+00 - val_loss: 0.0145 - val_accuracy: 0.0037 - lr: 0.0010 - 363ms/epoch - 8ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.01455
43/43 - 0s - loss: 0.0163 - accuracy: 0.0000e+00 - val_loss: 0.0469 - val_accuracy: 0.0037 - lr: 0.0010 - 341ms/epoch - 8ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.01455 to 0.00504, saving model to LSTM6.h5
43/43 - 0s - loss: 0.0115 - accuracy: 0.0000e+00 - val_loss: 0.0050 - val_accuracy: 0.0037 - lr: 0.0010 - 382ms/epoch - 9ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.00504
43/43 - 0s - loss: 0.0034 - accuracy: 0.0000e+00 - val_loss: 0.0221 - val_accuracy: 0.0037 - lr: 0.0010 - 330ms/epoch - 8ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.00504 to 0.00354, saving model to LSTM6.h5
43/43 - 0s - loss: 0.0036 - accuracy: 0.0000e+00 - val_loss: 0.0035 - val_accuracy: 0.0037 - lr: 0.0010 - 358ms/epoch - 8ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.00354
43/43 - 0s - loss: 0.0015 - accuracy: 0.0000e+00 - val_loss: 0.0160 - val_accuracy: 0.0037 - lr: 0.0010 - 363ms/epoch - 8ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.00354
43/43 - 0s - loss: 0.0019 - accuracy: 0.0000e+00 - val_loss: 0.0060 - val_accuracy: 0.0037 - lr: 0.0010 - 338ms/epoch - 8ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.00354
43/43 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0128 - val_accuracy: 0.0037 - lr: 0.0010 - 350ms/epoch - 8ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.00354
43/43 - 0s - loss: 0.0017 - accuracy: 0.0000e+00 - val_loss: 0.0076 - val_accuracy: 0.0037 - lr: 0.0010 - 346ms/epoch - 8ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00011: val_loss did not improve from 0.00354
43/43 - 0s - loss: 0.0017 - accuracy: 0.0000e+00 - val_loss: 0.0111 - val_accuracy: 0.0037 - lr: 0.0010 - 322ms/epoch - 7ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.00354
43/43 - 0s - loss: 0.0028 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 351ms/epoch - 8ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.00354
43/43 - 0s - loss: 9.3016e-04 - accuracy: 0.0000e+00 - val_loss: 0.0051 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 349ms/epoch - 8ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.00354
43/43 - 0s - loss: 9.9814e-04 - accuracy: 0.0000e+00 - val_loss: 0.0050 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 332ms/epoch - 8ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.00354
43/43 - 0s - loss: 9.5143e-04 - accuracy: 0.0000e+00 - val_loss: 0.0050 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 348ms/epoch - 8ms/step
Epoch 16/500

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00016: val_loss did not improve from 0.00354
43/43 - 0s - loss: 9.3894e-04 - accuracy: 0.0000e+00 - val_loss: 0.0049 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 334ms/epoch - 8ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.00354
43/43 - 0s - loss: 9.0303e-04 - accuracy: 0.0000e+00 - val_loss: 0.0049 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 328ms/epoch - 8ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.00354
43/43 - 0s - loss: 8.8993e-04 - accuracy: 0.0000e+00 - val_loss: 0.0048 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 351ms/epoch - 8ms/step

[Epochs 19-55 elided: val_loss never improved on 0.00354; ReduceLROnPlateau held the learning rate at 1e-05 from epoch 21; training loss crept from 8.82e-04 down to 8.26e-04 while val_loss stayed at 0.0047-0.0048.]

Epoch 56/500

Epoch 00056: val_loss did not improve from 0.00354
43/43 - 0s - loss: 8.2445e-04 - accuracy: 0.0000e+00 - val_loss: 0.0048 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 349ms/epoch - 8ms/step
Epoch 00056: early stopping
SMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 63.854129927017006 
RMSE:	 7.990877919666713 
MAPE:	 6.455960052106778

EMA
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 60.00625951694821 
RMSE:	 7.7463707319588195 
MAPE:	 6.477662803945572

WMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 56.69687583727807 
RMSE:	 7.529732786578689 
MAPE:	 6.079114892920341

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 115.62227637424353 
RMSE:	 10.752779937032262 
MAPE:	 9.572797111202712

KAMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	44.78% Accuracy
MSE:	 37.71755854324582 
RMSE:	 6.141462247970415 
MAPE:	 4.988005540782208

MIDPOINT
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	45.15% Accuracy
MSE:	 58.98858807771727 
RMSE:	 7.680402859076942 
MAPE:	 6.28601448834674

T3
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 144.62114230478295 
RMSE:	 12.02585308012629 
MAPE:	 9.865104713742154
TEMA
TEMA([input_arrays], [timeperiod=30])

Triple Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
9
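For reference, TEMA can be reproduced without TA-Lib by stacking three EMAs (TEMA = 3*EMA1 - 3*EMA2 + EMA3). A minimal sketch using pandas; note that pandas' EWM initialization differs from TA-Lib's SMA-seeded warm-up, so the earliest values will not match TA-Lib exactly:

```python
import pandas as pd

def tema(price: pd.Series, timeperiod: int = 30) -> pd.Series:
    """Triple Exponential Moving Average: 3*EMA1 - 3*EMA2 + EMA3."""
    ema1 = price.ewm(span=timeperiod, adjust=False).mean()
    ema2 = ema1.ewm(span=timeperiod, adjust=False).mean()
    ema3 = ema2.ewm(span=timeperiod, adjust=False).mean()
    return 3 * ema1 - 3 * ema2 + ema3

# Sanity check: a constant series passes through unchanged.
flat = pd.Series([50.0] * 40)
print(tema(flat, timeperiod=10).iloc[-1])  # 50.0
```

Because the double- and triple-smoothed terms cancel most of the EMA's lag, TEMA tracks price more tightly than a plain EMA of the same period, which is also why it tends to carry more of the series' raw volatility into the hybrid model.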

Working on TEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16996.849, Time=3.68 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14177.794, Time=2.13 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16779.945, Time=8.04 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14417.099, Time=11.61 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16996.773, Time=3.61 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-14470.746, Time=9.87 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-16999.230, Time=3.68 sec
 ARIMA(0,3,3)(0,0,0)[0]             : AIC=-14413.099, Time=14.88 sec
 ARIMA(1,3,3)(0,0,0)[0]             : AIC=-16992.097, Time=5.07 sec
 ARIMA(0,3,2)(0,0,0)[0] intercept   : AIC=-16997.225, Time=3.49 sec

Best model:  ARIMA(0,3,2)(0,0,0)[0]          
Total fit time: 66.092 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(0, 3, 2)   Log Likelihood                8522.615
Date:                Sun, 12 Dec 2021   AIC                         -16999.230
Time:                        22:16:03   BIC                         -16891.341
Sample:                             0   HQIC                        -16957.796
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1           2.33e-15      0.001   2.87e-12      1.000      -0.002       0.002
x2         -4.502e-16      0.000  -1.15e-12      1.000      -0.001       0.001
x3          3.943e-17      0.001   5.53e-14      1.000      -0.001       0.001
x4             1.0000      0.001   1486.752      0.000       0.999       1.001
x5         -1.326e-14      0.001  -2.01e-11      1.000      -0.001       0.001
x6         -7.238e-16   6.02e-05   -1.2e-11      1.000      -0.000       0.000
x7          4.644e-16      0.000   1.63e-12      1.000      -0.001       0.001
x8            -0.0003   6.84e-05     -4.783      0.000      -0.000      -0.000
x9          4.956e-16      0.001   8.09e-13      1.000      -0.001       0.001
x10        -5.078e-05      0.000     -0.169      0.866      -0.001       0.001
x11            0.0005   8.52e-05      5.342      0.000       0.000       0.001
x12        -6.163e-05   6.76e-05     -0.912      0.362      -0.000    7.08e-05
x13        -6.225e-17      0.000  -1.81e-13      1.000      -0.001       0.001
x14         2.723e-16      0.000   1.71e-12      1.000      -0.000       0.000
x15         2.531e-13    9.1e-05   2.78e-09      1.000      -0.000       0.000
x16        -3.448e-13      0.000  -1.94e-09      1.000      -0.000       0.000
x17         1.188e-12      0.000   1.15e-08      1.000      -0.000       0.000
x18        -5.746e-14      0.000  -5.12e-10      1.000      -0.000       0.000
x19        -2.336e-13      0.000  -2.29e-09      1.000      -0.000       0.000
x20        -9.777e-15      0.000  -9.27e-11      1.000      -0.000       0.000
ma.L1         -1.3477   4.17e-08  -3.23e+07      0.000      -1.348      -1.348
ma.L2          0.3862   8.11e-08   4.76e+06      0.000       0.386       0.386
sigma2          1e-10   7.38e-11      1.355      0.175   -4.46e-11    2.45e-10
===================================================================================
Ljung-Box (L1) (Q):                  50.19   Jarque-Bera (JB):           4788158.62
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.04   Skew:                           -10.02
Prob(H) (two-sided):                  0.00   Kurtosis:                       380.29
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 6.4e+24. Standard errors may be unstable.
ARIMA order: (0, 3, 2) 

/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
  super(Adam, self).__init__(name, **kwargs)
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.04654, saving model to LSTM6.h5
90/90 - 6s - loss: 0.1165 - accuracy: 0.0000e+00 - val_loss: 0.0465 - val_accuracy: 0.0037 - lr: 0.0010 - 6s/epoch - 64ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.04654
90/90 - 1s - loss: 0.0619 - accuracy: 0.0000e+00 - val_loss: 0.1186 - val_accuracy: 0.0000e+00 - lr: 0.0010 - 680ms/epoch - 8ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.04654 to 0.01602, saving model to LSTM6.h5
90/90 - 1s - loss: 0.0430 - accuracy: 0.0000e+00 - val_loss: 0.0160 - val_accuracy: 0.0037 - lr: 0.0010 - 692ms/epoch - 8ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.01602 to 0.01437, saving model to LSTM6.h5
90/90 - 1s - loss: 0.0415 - accuracy: 0.0000e+00 - val_loss: 0.0144 - val_accuracy: 0.0037 - lr: 0.0010 - 679ms/epoch - 8ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.01437 to 0.01148, saving model to LSTM6.h5
90/90 - 1s - loss: 0.0272 - accuracy: 0.0000e+00 - val_loss: 0.0115 - val_accuracy: 0.0037 - lr: 0.0010 - 670ms/epoch - 7ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.01148 to 0.01029, saving model to LSTM6.h5
90/90 - 1s - loss: 0.0161 - accuracy: 0.0000e+00 - val_loss: 0.0103 - val_accuracy: 0.0037 - lr: 0.0010 - 676ms/epoch - 8ms/step
Epoch 7/500

Epoch 00007: val_loss improved from 0.01029 to 0.01011, saving model to LSTM6.h5
90/90 - 1s - loss: 0.0072 - accuracy: 0.0000e+00 - val_loss: 0.0101 - val_accuracy: 0.0037 - lr: 0.0010 - 675ms/epoch - 7ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.01011
90/90 - 1s - loss: 0.0048 - accuracy: 0.0000e+00 - val_loss: 0.0127 - val_accuracy: 0.0037 - lr: 0.0010 - 657ms/epoch - 7ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.01011
90/90 - 1s - loss: 0.0041 - accuracy: 0.0000e+00 - val_loss: 0.0158 - val_accuracy: 0.0037 - lr: 0.0010 - 651ms/epoch - 7ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.01011
90/90 - 1s - loss: 0.0044 - accuracy: 0.0000e+00 - val_loss: 0.0215 - val_accuracy: 0.0037 - lr: 0.0010 - 647ms/epoch - 7ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.01011
90/90 - 1s - loss: 0.0057 - accuracy: 0.0000e+00 - val_loss: 0.0271 - val_accuracy: 0.0037 - lr: 0.0010 - 642ms/epoch - 7ms/step
Epoch 12/500

Epoch 00012: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00012: val_loss did not improve from 0.01011
90/90 - 1s - loss: 0.0091 - accuracy: 0.0000e+00 - val_loss: 0.0328 - val_accuracy: 0.0037 - lr: 0.0010 - 677ms/epoch - 8ms/step
Epoch 13/500

Epoch 00013: val_loss improved from 0.01011 to 0.00591, saving model to LSTM6.h5
90/90 - 1s - loss: 0.0235 - accuracy: 0.0000e+00 - val_loss: 0.0059 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 687ms/epoch - 8ms/step
Epoch 14/500

Epoch 00014: val_loss improved from 0.00591 to 0.00545, saving model to LSTM6.h5
90/90 - 1s - loss: 0.0040 - accuracy: 0.0000e+00 - val_loss: 0.0055 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 672ms/epoch - 7ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.00545
90/90 - 1s - loss: 0.0025 - accuracy: 0.0000e+00 - val_loss: 0.0080 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 667ms/epoch - 7ms/step

[Epochs 16-63 elided: val_loss never improved on 0.00545; ReduceLROnPlateau cut the learning rate to 1e-05 at epoch 19; training loss fell steadily to 7.29e-04 while val_loss climbed from 0.0080 to 0.0298, a sign the model kept overfitting.]
Epoch 64/500

Epoch 00064: val_loss did not improve from 0.00545
90/90 - 1s - loss: 7.2697e-04 - accuracy: 0.0000e+00 - val_loss: 0.0300 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 645ms/epoch - 7ms/step
Epoch 00064: early stopping
SMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 63.854129927017006 
RMSE:	 7.990877919666713 
MAPE:	 6.455960052106778

EMA
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 60.00625951694821 
RMSE:	 7.7463707319588195 
MAPE:	 6.477662803945572

WMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 56.69687583727807 
RMSE:	 7.529732786578689 
MAPE:	 6.079114892920341

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 115.62227637424353 
RMSE:	 10.752779937032262 
MAPE:	 9.572797111202712

KAMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	44.78% Accuracy
MSE:	 37.71755854324582 
RMSE:	 6.141462247970415 
MAPE:	 4.988005540782208

MIDPOINT
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	45.15% Accuracy
MSE:	 58.98858807771727 
RMSE:	 7.680402859076942 
MAPE:	 6.28601448834674

T3
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 144.62114230478295 
RMSE:	 12.02585308012629 
MAPE:	 9.865104713742154

TEMA
Prediction vs Close:		51.12% Accuracy
Prediction vs Prediction:	48.13% Accuracy
MSE:	 60.21739561493253 
RMSE:	 7.75998683084788 
MAPE:	 6.775089256594788
Runtime: mins: 1.1129575148169442
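For anyone reproducing the tables above: "Prediction vs Close" is read here as directional accuracy (the share of steps where the predicted move has the same sign as the actual move), reported alongside the usual MSE/RMSE/MAPE. A minimal sketch with made-up numbers; the helper names are illustrative, not taken from the notebook:

```python
import numpy as np

def directional_accuracy(pred, actual):
    """Percent of steps where predicted and actual moves share a sign."""
    return 100 * np.mean(np.sign(np.diff(pred)) == np.sign(np.diff(actual)))

def mape(pred, actual):
    """Mean absolute percentage error, in percent."""
    pred, actual = np.asarray(pred), np.asarray(actual)
    return 100 * np.mean(np.abs((actual - pred) / actual))

actual = np.array([100.0, 101.0, 99.5, 102.0])   # made-up closing prices
pred   = np.array([100.5, 101.2, 100.0, 101.0])  # made-up predictions
mse = np.mean((actual - pred) ** 2)
print(directional_accuracy(pred, actual), mse, mse ** 0.5, mape(pred, actual))
```

A model can score near 50% on direction (no better than a coin flip, as several rows above do) while still posting a small MAPE, so the two families of metrics are worth reading together.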

Architecture Used

In [ ]:
from google.colab import files
import cv2
uploaded = files.upload()
Saving Experiment6.png to Experiment6 (2).png
In [ ]:
import matplotlib.pyplot as plt

img = cv2.imread('Experiment6.png')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR; convert for matplotlib
plt.figure(figsize=(20, 10))
plt.axis("off")
plt.title('LSTM Architecture ' + imgfile, fontsize=18)
plt.imshow(img)
Out[ ]:
<matplotlib.image.AxesImage at 0x7f4c1f1e87d0>

Model Plots

In [167]:
with open('simulation6_data.json') as json_file:
    simulation6 = json.load(json_file)
fileimg = 'Experiment6'
In [168]:
for i in range(len(list(simulation6.keys()))):
  SIM = list(simulation6.keys())[i]
  plot_train(simulation6,SIM)
  plot_test(simulation6,SIM)
----- Train RMSE for SMA ----- 8.87101181703794
----- Train_MSE_LSTM for SMA ----- 78.6948506580268
----- Train MAE LSTM for SMA ----- 7.764835828601724
----- Test RMSE for SMA----- 7.990877919666713
----- Test_MSE_LSTM for SMA----- 63.854129927017006
----- Test_MAE_LSTM for SMA----- 6.455960052106778
----- Train RMSE for EMA ----- 10.17937239908588
----- Train_MSE_LSTM for EMA ----- 103.61962243927141
----- Train MAE LSTM for EMA ----- 9.027416054918984
----- Test RMSE for EMA----- 7.7463707319588195
----- Test_MSE_LSTM for EMA----- 60.00625951694821
----- Test_MAE_LSTM for EMA----- 6.477662803945572
----- Train RMSE for WMA ----- 10.487371340854157
----- Train_MSE_LSTM for WMA ----- 109.98495764096911
----- Train MAE LSTM for WMA ----- 9.322647111386633
----- Test RMSE for WMA----- 7.529732786578689
----- Test_MSE_LSTM for WMA----- 56.69687583727807
----- Test_MAE_LSTM for WMA----- 6.079114892920341
----- Train RMSE for DEMA ----- 12.130525100228542
----- Train_MSE_LSTM for DEMA ----- 147.14963920727467
----- Train MAE LSTM for DEMA ----- 10.904426826985949
----- Test RMSE for DEMA----- 10.752779937032262
----- Test_MSE_LSTM for DEMA----- 115.62227637424353
----- Test_MAE_LSTM for DEMA----- 9.572797111202712
----- Train RMSE for KAMA ----- 10.55508436039303
----- Train_MSE_LSTM for KAMA ----- 111.40980585501353
----- Train MAE LSTM for KAMA ----- 9.485642571073708
----- Test RMSE for KAMA----- 6.141462247970415
----- Test_MSE_LSTM for KAMA----- 37.71755854324582
----- Test_MAE_LSTM for KAMA----- 4.988005540782208
----- Train RMSE for MIDPOINT ----- 9.465450907632647
----- Train_MSE_LSTM for MIDPOINT ----- 89.59476088480369
----- Train MAE LSTM for MIDPOINT ----- 8.411639738797975
----- Test RMSE for MIDPOINT----- 7.680402859076942
----- Test_MSE_LSTM for MIDPOINT----- 58.98858807771727
----- Test_MAE_LSTM for MIDPOINT----- 6.28601448834674
----- Train RMSE for T3 ----- 12.030459071540214
----- Train_MSE_LSTM for T3 ----- 144.7319454720042
----- Train MAE LSTM for T3 ----- 10.830626645936897
----- Test RMSE for T3----- 12.02585308012629
----- Test_MSE_LSTM for T3----- 144.62114230478295
----- Test_MAE_LSTM for T3----- 9.865104713742154
----- Train RMSE for TEMA ----- 7.4517243925140235
----- Train_MSE_LSTM for TEMA ----- 55.52819642198849
----- Train MAE LSTM for TEMA ----- 5.180568682312021
----- Test RMSE for TEMA----- 7.75998683084788
----- Test_MSE_LSTM for TEMA----- 60.21739561493253
----- Test_MAE_LSTM for TEMA----- 6.775089256594788
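A quick way to scan the numbers above for generalization gaps is to pair each indicator's train and test RMSE. This sketch hard-codes a few of the printed values (rounded to three decimals):

```python
train_rmse = {"SMA": 8.871, "EMA": 10.179, "WMA": 10.487, "TEMA": 7.452}
test_rmse  = {"SMA": 7.991, "EMA": 7.746,  "WMA": 7.530,  "TEMA": 7.760}

# Positive gap = worse on test than train (possible overfit); negative = the reverse.
gaps = {ma: round(test_rmse[ma] - train_rmse[ma], 3) for ma in train_rmse}
print(gaps)
```

Of the four shown, only TEMA tests worse than it trains, consistent with the heavier-tailed volatility that the triple smoothing preserves.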

ARIMA with Exogenous Variables and Multistep Multivariate LSTM Hybrid Model: Experiment 7

In [ ]:
def get_arima_exog(dataframe,original_data, train_len, test_len):    
    

    # prepare train and test data for exogenous vr
    X_value = pd.DataFrame(low_vol.iloc[:, :])
    y_value = pd.DataFrame(low_vol.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scaler.fit(X_value)
    y_scaler.fit(y_value)
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    # Get data and check shape
    # X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)#X will be of shape 224 X 3 X 21 (each 3 X 21 array will be 3 days' worth of data). yc will have the corresponding closing price value
    # pdb.set_trace()
    X_train, X_test, = split_train_test(X_scale_dataset)
    y_train, y_test, = split_train_test(y_scale_dataset)
    yc_train,yc_test = split_train_test(low_vol_data)
    yc = yc_test.values.tolist()
    y_train_list = y_train.flatten().tolist()
    y_test_list = y_test.flatten().tolist()
    # yc_train, yc_test, = split_train_test(original_data)
    index_train, index_test, = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)

    # Initialize model
    model = auto_arima(y_train_list,exogenous  = X_train,trace=True, error_action='ignore', start_p=1,start_q=1,max_p=3,max_q=3,d=3,
            suppress_warnings=True,stepwise=True,seasonal=True)

      # Determine model parameters
    print(model.summary())
    model.fit(y_train_list,maxiter=200)
    order = model.get_params()['order']
    print('ARIMA order:', order, '\n')

      # Genereate predictions
    prediction = []
    for i in range(len(y_test_list)):
        model = pmdarima.ARIMA(order=order)
        model.fit(y_train_list)
        # print('working on', i+1, 'of', len(y_test), '-- ' + str(int(100 * (i + 1) / len(y_test))) + '% complete')

        prediction.append(model.predict()[0])
        y_train_list.append(y_test_list[i])

    predictionte = y_scaler.inverse_transform(np.array(prediction).reshape(-1,1))
    y_test_ = y_scaler.inverse_transform(np.array(y_test_list).reshape(-1,1))

    # Generate error data (compare against the inverse-transformed actuals, not the raw close series)
    mse = mean_squared_error(y_test_, predictionte)
    rmse = mse ** 0.5
    mae = mean_absolute_error(y_test_, predictionte)
    return yc,predictionte.flatten().tolist(), mse, rmse, mae
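The per-step refit loop above follows a standard expanding-window ("walk-forward") validation pattern. A minimal, dependency-free sketch of that pattern is below; the `forecast` callable is a hypothetical stand-in for the per-step `pmdarima.ARIMA` refit-and-predict:

```python
# Walk-forward sketch: forecast one step ahead, then append the observed test
# value to the history before the next (re)fit, as get_arima_exog does.
# A naive last-value "model" stands in for pmdarima.ARIMA to keep it runnable.
def walk_forward(train, test, forecast=lambda history: history[-1]):
    history = list(train)
    predictions = []
    for actual in test:
        predictions.append(forecast(history))  # one-step-ahead forecast
        history.append(actual)                 # grow the training window
    return predictions
```

Refitting a full ARIMA at every step (as the cell above does) is what makes the ARIMA stage slow; the expanding history is what keeps each forecast a true one-step-ahead prediction.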
In [ ]:
def get_lstm(data, original_data, train_len, test_len, img_file, ma, lstm_len=3):
    # prepare train and test data
    X_value = pd.DataFrame(data.iloc[:, :])
    y_value = pd.DataFrame(data.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    # Get data and check shape
    X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)  # X has shape (samples, n_steps_in, n_features): each slice is n_steps_in days' worth of data; yc holds the corresponding closing price values
    # pdb.set_trace()
    X_train, X_test = split_train_test(X)
    y_train, y_test = split_train_test(y)
    # yc_train, yc_test = split_train_test(original_data)
    index_train, index_test = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    det = 20  # empirical offset later subtracted from the inverse-transformed test predictions
    input_dim = X_train.shape[1]     # n_steps_in (e.g. 3)
    feature_size = X_train.shape[2]  # number of features (e.g. 24)
    output_dim = y_train.shape[1]    # n_steps_out (e.g. 1)



    # Option 1
    # Set up & fit LSTM RNN
    # model = Sequential()
    # model.add(LSTM(256, activation='relu', kernel_initializer='he_normal', input_shape=(input_dim, feature_size)))
    # model.add(Dense(units=64,activation='relu'))
    # model.add(Dropout(0.5))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(learning_rate = 0.001), loss='mse')

    # ## Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()


    # # # option 2
    # model = Sequential()
    # model.add(Bidirectional(LSTM(units= 128), input_shape=(input_dim, feature_size)))
    # model.add(Dense(64))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(learning_rate=0.001), loss='mean_squared_error', metrics=['accuracy'])
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM7.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()

    # Option 3
    # Define a custom 'double tanh' activation: tanh scaled to the range (-2, 2),
    # which suits scaled targets that can approach the edges of tanh's native (-1, 1) range
    class Double_Tanh(Activation):
        def __init__(self, activation, **kwargs):
            super(Double_Tanh, self).__init__(activation, **kwargs)
            self.__name__ = 'double_tanh'

    def double_tanh(x):
        return K.tanh(x) * 2

    get_custom_objects().update({'double_tanh': Double_Tanh(double_tanh)})
    # Model Generation
    model = Sequential()
    #check https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
    model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2, kernel_regularizer=l1_l2(0.00,0.00), bias_regularizer=l1_l2(0.00,0.00)))
    model.add(Dense(1))
    model.add(Activation(double_tanh))
    model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
    # Common code
    callbacks = [
    EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    ModelCheckpoint('LSTM7.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    fname1 = img_file+'.png'
    tensorflow.keras.utils.plot_model(
        model, to_file=fname1, show_shapes=True, show_dtype=False,
        show_layer_names=True, expand_nested=False, dpi=96,
        layer_range=None, show_layer_activations=False
    )
    history = model.fit(X_train, y_train, epochs=500, batch_size=int(optimized_period[ma]), verbose=2, callbacks=callbacks, validation_data=(X_test, y_test), shuffle=False)
    # plot loss
    fname2 = img_file+'-'+ma
    plt.title(img_file+'-'+ma+' Loss')
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    pyplot.plot(history.history['loss'], label='train')
    pyplot.plot(history.history['val_loss'], label='validation')
    pyplot.legend()
    pyplot.savefig(fname2+'.png',dpi='figure')
    pyplot.show()

    # Option 4
    # Set up & fit LSTM RNN
    # model = Sequential()
    # model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(x_train.shape[1], 1)))
    # model.add(LSTM(units=int(lstm_len/2)))
    # model.add(Dense(1, activation='sigmoid'))
    # model.compile(loss='mean_squared_error', optimizer='adam')
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()



    # Generate predictions
    predictiontr = model.predict(X_train, verbose=0)
    predictiontr = y_scaler.inverse_transform(predictiontr).tolist()
    # flatten the list of single-element lists into a flat list
    predictiontr = [v for row in predictiontr for v in row]
    # Generate error data
    ## replace with yc, X_test generated by the new multistep method
    Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()
    # compare both sides in the original (inverse-transformed) scale
    mse_tr = mean_squared_error(Original_tr, predictiontr)
    rmse_tr = mse_tr ** 0.5
    # mape = mean_absolute_percentage_error(X_test, pd.Series(predictiontr))
    mae_tr = mean_absolute_error(Original_tr, pd.Series(predictiontr))


    predictionte = model.predict(X_test, verbose=0)
    # subtract the empirical offset 'det' after inverse-transforming
    predictionte = (y_scaler.inverse_transform(predictionte) - det).tolist()
    # flatten the list of single-element lists into a flat list
    predictionte = [v for row in predictionte for v in row]
    # Generate error data
    Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()
    # compare both sides in the original (inverse-transformed) scale
    mse_te = mean_squared_error(Original_te, predictionte)
    rmse_te = mse_te ** 0.5
    # mape = mean_absolute_percentage_error(X_test, pd.Series(predictionte))
    mae_te = mean_absolute_error(Original_te, pd.Series(predictionte))

    return Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,Original_te,predictionte, mse_te,rmse_te,mae_te
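The `double_tanh` activation defined in Option 3 is simply tanh scaled by 2, so its output spans (-2, 2) rather than tanh's native (-1, 1); this gives headroom when MinMax-scaled targets sit near the edges of [-1, 1]. A plain-Python equivalent (with `math.tanh` standing in for `K.tanh`) makes the behaviour easy to check in isolation:

```python
import math

# Plain-Python equivalent of the custom double_tanh activation:
# tanh scaled by 2, bounded in the open interval (-2, 2).
def double_tanh(x):
    return math.tanh(x) * 2

# Large inputs approach +/-2 without ever reaching them; zero maps to zero.
samples = [double_tanh(x) for x in (-10.0, 0.0, 10.0)]
```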
In [ ]:
if __name__ == '__main__':
    start_time = timeit.default_timer()
    simulation7 = {}
    imgfile = 'Experiment7'
    for ma in optimized_period:
                print(ma)
                print(functions[ma])
                print(int(optimized_period[ma]))
              # if ma == 'SMA':
                low_vol = df.apply(lambda c:  functions[ma](c, timeperiod = int( optimized_period[ma])))
                low_vol = low_vol.fillna(0)
                low_vol_data = df['close']
                high_vol = pd.DataFrame()
                df2 = df.copy()
                for i in df2.columns:
                  if i in low_vol.columns:
                    high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
                high_vol_data = df['close']
                ## *****************************************************
                # Generate ARIMA and LSTM predictions
                print('\nWorking on ' + ma + ' predictions')
                try:
                  print('parameters used : ', train_len, test_len)
                  low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse,low_vol_mae = get_arima_exog(low_vol,low_vol_data, train_len, test_len)
                except Exception:
                    print('ARIMA error, skipping to next MA type')
                    continue
                Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,high_vol_Original, high_vol_prediction, high_vol_mse, high_vol_rmse,high_vol_mae, = get_lstm(high_vol,high_vol_data, train_len, test_len,imgfile,ma)
                final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr) # ignoring first 3 steps 
                mse_ftr = mean_squared_error(df['close'].head(train_len).values,final_prediction_tr.values)
                rmse_ftr = mse_ftr ** 0.5
                mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
                mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)

                final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)
                mse = mean_squared_error(df['close'].tail(test_len).values,final_prediction.values)
                rmse = mse ** 0.5
                mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
                mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
                # Generate prediction accuracy
                actual = df['close'].tail(test_len).values
                result_1 = []
                result_2 = []
                for i in range(1, len(final_prediction)):
                    # Compare prediction to previous close price
                    if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
                        result_1.append(1)
                    elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
                        result_1.append(1)
                    else:
                        result_1.append(0)

                    # Compare prediction to previous prediction
                    if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
                        result_2.append(1)
                    elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
                        result_2.append(1)
                    else:
                        result_2.append(0)

                accuracy_1 = np.mean(result_1)
                accuracy_2 = np.mean(result_2)

                simulation7[ma] = {'low_vol': {'original':list(low_vol_Original), 'prediction': list(low_vol_prediction) , 'mse': low_vol_mse,
                                              'rmse': low_vol_rmse, 'mae' : low_vol_mae},
                                  'high_vol': {'original':list(high_vol_Original),'prediction': list(high_vol_prediction), 'mse': high_vol_mse,
                                              'rmse': high_vol_rmse, 'mae' : high_vol_mae},
                                  'final_tr': {'original':df['close'].head(train_len).tolist(),'prediction': final_prediction_tr.values.tolist(), 'mse': mse_ftr,
                                              'rmse': rmse_ftr, 'mae' : mae_ftr},
                                  'final': {'original': df['close'].tail(test_len).tolist(), 'prediction': final_prediction.values.tolist(), 'mse': mse,
                                            'rmse': rmse, 'mae': mae },
                                  'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}

                # save simulation data here as checkpoint
                with open('simulation7_data.json', 'w') as fp:
                    json.dump(simulation7, fp)

                for key in simulation7.keys():
                    print('\n' + key)
                    print('Prediction vs Close:\t\t' + str(round(100*simulation7[key]['accuracy']['prediction vs close'], 2))
                          + '% Accuracy')
                    print('Prediction vs Prediction:\t' + str(round(100*simulation7[key]['accuracy']['prediction vs prediction'], 2))
                          + '% Accuracy')
                    print('MSE:\t', simulation7[key]['final']['mse'],
                          '\nRMSE:\t', simulation7[key]['final']['rmse'],
                          '\nMAE:\t', simulation7[key]['final']['mae'])
              # else:
              #   break
    elapsed = timeit.default_timer() - start_time
    print('Runtime (min):', elapsed / 60)
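The directional-accuracy bookkeeping in the main loop can be restated as a small, self-contained helper, which makes the two measures easy to verify on toy data (the inputs below are illustrative only, not taken from the experiment):

```python
# accuracy_1 scores each prediction's direction against the previous actual
# close; accuracy_2 scores prediction-to-prediction moves against
# actual-to-actual moves, mirroring result_1/result_2 in the loop above.
def directional_accuracies(prediction, actual):
    result_1, result_2 = [], []
    for i in range(1, len(prediction)):
        # Compare prediction to previous close price
        hit_1 = ((prediction[i] > actual[i - 1] and actual[i] > actual[i - 1]) or
                 (prediction[i] < actual[i - 1] and actual[i] < actual[i - 1]))
        result_1.append(1 if hit_1 else 0)
        # Compare prediction to previous prediction
        hit_2 = ((prediction[i] > prediction[i - 1] and actual[i] > actual[i - 1]) or
                 (prediction[i] < prediction[i - 1] and actual[i] < actual[i - 1]))
        result_2.append(1 if hit_2 else 0)
    return sum(result_1) / len(result_1), sum(result_2) / len(result_2)
```

Note that both measures score direction only, not magnitude, which is why a model can show ~50% accuracy here while still having a low MSE.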
SMA
SMA([input_arrays], [timeperiod=30])

Simple Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
17

Working on SMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-15057.252, Time=5.47 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-13616.841, Time=2.96 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-15177.809, Time=11.08 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14725.568, Time=12.35 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-15511.840, Time=16.78 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-15663.563, Time=16.92 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-15093.498, Time=7.92 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-15194.504, Time=11.57 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=-14885.340, Time=21.37 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 106.454 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood                7855.782
Date:                Sun, 12 Dec 2021   AIC                         -15663.563
Time:                        22:39:18   BIC                         -15550.983
Sample:                             0   HQIC                        -15620.328
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -1.202e-05   4.78e-05     -0.251      0.801      -0.000    8.17e-05
x2         -1.202e-05   2.63e-05     -0.458      0.647   -6.35e-05    3.95e-05
x3          -1.21e-05      0.000     -0.118      0.906      -0.000       0.000
x4             1.0000   3.59e-05   2.79e+04      0.000       1.000       1.000
x5         -1.149e-05   3.47e-05     -0.332      0.740   -7.94e-05    5.65e-05
x6         -1.354e-05   2.94e-05     -0.461      0.645   -7.11e-05     4.4e-05
x7         -1.198e-05   3.25e-06     -3.693      0.000   -1.83e-05   -5.62e-06
x8             0.0027   9.17e-06    293.847      0.000       0.003       0.003
x9         -8.458e-07      0.000     -0.006      0.995      -0.000       0.000
x10            0.0005      0.000      1.213      0.225      -0.000       0.001
x11           -0.0027   4.93e-05    -54.454      0.000      -0.003      -0.003
x12            0.0007   3.53e-05     19.122      0.000       0.001       0.001
x13        -1.207e-05   2.16e-05     -0.559      0.576   -5.44e-05    3.03e-05
x14        -3.571e-05   1.38e-05     -2.581      0.010   -6.28e-05   -8.59e-06
x15        -1.308e-05   2.71e-06     -4.820      0.000   -1.84e-05   -7.76e-06
x16         -1.12e-05   4.71e-05     -0.238      0.812      -0.000    8.11e-05
x17        -1.059e-05   1.48e-05     -0.715      0.474   -3.96e-05    1.84e-05
x18         -2.03e-05   5.97e-05     -0.340      0.734      -0.000    9.68e-05
x19        -1.389e-05   3.69e-05     -0.376      0.707   -8.63e-05    5.85e-05
x20         2.105e-05      0.000      0.107      0.915      -0.000       0.000
ar.L1         -1.1996   4.09e-05  -2.93e+04      0.000      -1.200      -1.200
ar.L2         -0.8995   1.54e-05  -5.82e+04      0.000      -0.900      -0.899
ar.L3         -0.3999   1.46e-05  -2.74e+04      0.000      -0.400      -0.400
sigma2      2.425e-10   7.55e-11      3.213      0.001    9.46e-11     3.9e-10
===================================================================================
Ljung-Box (L1) (Q):                  14.46   Jarque-Bera (JB):           2454147.19
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                            -3.95
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.38
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.88e+20. Standard errors may be unstable.
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.05693, saving model to LSTM7.h5
48/48 - 3s - loss: 0.1482 - mse: 0.1482 - mae: 0.2803 - val_loss: 0.0569 - val_mse: 0.0569 - val_mae: 0.2203 - lr: 0.0010 - 3s/epoch - 66ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.05693 to 0.02302, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0268 - mse: 0.0268 - mae: 0.1314 - val_loss: 0.0230 - val_mse: 0.0230 - val_mae: 0.1301 - lr: 0.0010 - 296ms/epoch - 6ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.02302 to 0.01248, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0184 - mse: 0.0184 - mae: 0.1084 - val_loss: 0.0125 - val_mse: 0.0125 - val_mae: 0.0898 - lr: 0.0010 - 286ms/epoch - 6ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.01248 to 0.00900, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0156 - mse: 0.0156 - mae: 0.1006 - val_loss: 0.0090 - val_mse: 0.0090 - val_mae: 0.0747 - lr: 0.0010 - 332ms/epoch - 7ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.00900
48/48 - 0s - loss: 0.0146 - mse: 0.0146 - mae: 0.0979 - val_loss: 0.0100 - val_mse: 0.0100 - val_mae: 0.0789 - lr: 0.0010 - 305ms/epoch - 6ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.00900
48/48 - 0s - loss: 0.0144 - mse: 0.0144 - mae: 0.0957 - val_loss: 0.0114 - val_mse: 0.0114 - val_mae: 0.0845 - lr: 0.0010 - 279ms/epoch - 6ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.00900
48/48 - 0s - loss: 0.0129 - mse: 0.0129 - mae: 0.0896 - val_loss: 0.0136 - val_mse: 0.0136 - val_mae: 0.0937 - lr: 0.0010 - 304ms/epoch - 6ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.00900
48/48 - 0s - loss: 0.0126 - mse: 0.0126 - mae: 0.0892 - val_loss: 0.0170 - val_mse: 0.0170 - val_mae: 0.1067 - lr: 0.0010 - 282ms/epoch - 6ms/step
Epoch 9/500

Epoch 00009: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00009: val_loss did not improve from 0.00900
48/48 - 0s - loss: 0.0121 - mse: 0.0121 - mae: 0.0879 - val_loss: 0.0165 - val_mse: 0.0165 - val_mae: 0.1046 - lr: 0.0010 - 279ms/epoch - 6ms/step
Epoch 10/500

Epoch 00010: val_loss improved from 0.00900 to 0.00852, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0221 - mse: 0.0221 - mae: 0.1204 - val_loss: 0.0085 - val_mse: 0.0085 - val_mae: 0.0727 - lr: 1.0000e-04 - 291ms/epoch - 6ms/step
Epoch 11/500

Epoch 00011: val_loss improved from 0.00852 to 0.00772, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0082 - mse: 0.0082 - mae: 0.0736 - val_loss: 0.0077 - val_mse: 0.0077 - val_mae: 0.0694 - lr: 1.0000e-04 - 328ms/epoch - 7ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0075 - mse: 0.0075 - mae: 0.0698 - val_loss: 0.0079 - val_mse: 0.0079 - val_mae: 0.0703 - lr: 1.0000e-04 - 295ms/epoch - 6ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0076 - mse: 0.0076 - mae: 0.0704 - val_loss: 0.0085 - val_mse: 0.0085 - val_mae: 0.0728 - lr: 1.0000e-04 - 272ms/epoch - 6ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0641 - val_loss: 0.0092 - val_mse: 0.0092 - val_mae: 0.0756 - lr: 1.0000e-04 - 312ms/epoch - 7ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0655 - val_loss: 0.0096 - val_mse: 0.0096 - val_mae: 0.0770 - lr: 1.0000e-04 - 310ms/epoch - 6ms/step
Epoch 16/500

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00016: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0645 - val_loss: 0.0100 - val_mse: 0.0100 - val_mae: 0.0787 - lr: 1.0000e-04 - 277ms/epoch - 6ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0618 - val_loss: 0.0099 - val_mse: 0.0099 - val_mae: 0.0785 - lr: 1.0000e-05 - 296ms/epoch - 6ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0613 - val_loss: 0.0099 - val_mse: 0.0099 - val_mae: 0.0786 - lr: 1.0000e-05 - 289ms/epoch - 6ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0621 - val_loss: 0.0099 - val_mse: 0.0099 - val_mae: 0.0784 - lr: 1.0000e-05 - 298ms/epoch - 6ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0645 - val_loss: 0.0099 - val_mse: 0.0099 - val_mae: 0.0785 - lr: 1.0000e-05 - 269ms/epoch - 6ms/step
Epoch 21/500

Epoch 00021: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00021: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0624 - val_loss: 0.0099 - val_mse: 0.0099 - val_mae: 0.0786 - lr: 1.0000e-05 - 315ms/epoch - 7ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0631 - val_loss: 0.0100 - val_mse: 0.0100 - val_mae: 0.0787 - lr: 1.0000e-05 - 334ms/epoch - 7ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0607 - val_loss: 0.0100 - val_mse: 0.0100 - val_mae: 0.0788 - lr: 1.0000e-05 - 301ms/epoch - 6ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0634 - val_loss: 0.0100 - val_mse: 0.0100 - val_mae: 0.0790 - lr: 1.0000e-05 - 311ms/epoch - 6ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0614 - val_loss: 0.0101 - val_mse: 0.0101 - val_mae: 0.0791 - lr: 1.0000e-05 - 300ms/epoch - 6ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0619 - val_loss: 0.0101 - val_mse: 0.0101 - val_mae: 0.0791 - lr: 1.0000e-05 - 286ms/epoch - 6ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0645 - val_loss: 0.0101 - val_mse: 0.0101 - val_mae: 0.0794 - lr: 1.0000e-05 - 275ms/epoch - 6ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0610 - val_loss: 0.0102 - val_mse: 0.0102 - val_mae: 0.0796 - lr: 1.0000e-05 - 286ms/epoch - 6ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0651 - val_loss: 0.0102 - val_mse: 0.0102 - val_mae: 0.0798 - lr: 1.0000e-05 - 288ms/epoch - 6ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0623 - val_loss: 0.0103 - val_mse: 0.0103 - val_mae: 0.0800 - lr: 1.0000e-05 - 281ms/epoch - 6ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0578 - val_loss: 0.0104 - val_mse: 0.0104 - val_mae: 0.0804 - lr: 1.0000e-05 - 300ms/epoch - 6ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0627 - val_loss: 0.0104 - val_mse: 0.0104 - val_mae: 0.0805 - lr: 1.0000e-05 - 302ms/epoch - 6ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0615 - val_loss: 0.0104 - val_mse: 0.0104 - val_mae: 0.0806 - lr: 1.0000e-05 - 292ms/epoch - 6ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0612 - val_loss: 0.0104 - val_mse: 0.0104 - val_mae: 0.0807 - lr: 1.0000e-05 - 297ms/epoch - 6ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0632 - val_loss: 0.0105 - val_mse: 0.0105 - val_mae: 0.0808 - lr: 1.0000e-05 - 310ms/epoch - 6ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0617 - val_loss: 0.0106 - val_mse: 0.0106 - val_mae: 0.0811 - lr: 1.0000e-05 - 304ms/epoch - 6ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0606 - val_loss: 0.0106 - val_mse: 0.0106 - val_mae: 0.0815 - lr: 1.0000e-05 - 288ms/epoch - 6ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0594 - val_loss: 0.0108 - val_mse: 0.0108 - val_mae: 0.0819 - lr: 1.0000e-05 - 291ms/epoch - 6ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0621 - val_loss: 0.0108 - val_mse: 0.0108 - val_mae: 0.0820 - lr: 1.0000e-05 - 280ms/epoch - 6ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0631 - val_loss: 0.0108 - val_mse: 0.0108 - val_mae: 0.0823 - lr: 1.0000e-05 - 316ms/epoch - 7ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0622 - val_loss: 0.0109 - val_mse: 0.0109 - val_mae: 0.0826 - lr: 1.0000e-05 - 296ms/epoch - 6ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0627 - val_loss: 0.0110 - val_mse: 0.0110 - val_mae: 0.0829 - lr: 1.0000e-05 - 283ms/epoch - 6ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0619 - val_loss: 0.0110 - val_mse: 0.0110 - val_mae: 0.0828 - lr: 1.0000e-05 - 283ms/epoch - 6ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0601 - val_loss: 0.0110 - val_mse: 0.0110 - val_mae: 0.0831 - lr: 1.0000e-05 - 283ms/epoch - 6ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0592 - val_loss: 0.0111 - val_mse: 0.0111 - val_mae: 0.0835 - lr: 1.0000e-05 - 294ms/epoch - 6ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0578 - val_loss: 0.0113 - val_mse: 0.0113 - val_mae: 0.0840 - lr: 1.0000e-05 - 308ms/epoch - 6ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0625 - val_loss: 0.0113 - val_mse: 0.0113 - val_mae: 0.0841 - lr: 1.0000e-05 - 291ms/epoch - 6ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0626 - val_loss: 0.0113 - val_mse: 0.0113 - val_mae: 0.0842 - lr: 1.0000e-05 - 297ms/epoch - 6ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0598 - val_loss: 0.0114 - val_mse: 0.0114 - val_mae: 0.0845 - lr: 1.0000e-05 - 297ms/epoch - 6ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0614 - val_loss: 0.0115 - val_mse: 0.0115 - val_mae: 0.0850 - lr: 1.0000e-05 - 294ms/epoch - 6ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0632 - val_loss: 0.0116 - val_mse: 0.0116 - val_mae: 0.0854 - lr: 1.0000e-05 - 313ms/epoch - 7ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0583 - val_loss: 0.0118 - val_mse: 0.0118 - val_mae: 0.0859 - lr: 1.0000e-05 - 297ms/epoch - 6ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0608 - val_loss: 0.0118 - val_mse: 0.0118 - val_mae: 0.0860 - lr: 1.0000e-05 - 277ms/epoch - 6ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0618 - val_loss: 0.0118 - val_mse: 0.0118 - val_mae: 0.0862 - lr: 1.0000e-05 - 295ms/epoch - 6ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0593 - val_loss: 0.0120 - val_mse: 0.0120 - val_mae: 0.0869 - lr: 1.0000e-05 - 298ms/epoch - 6ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0606 - val_loss: 0.0122 - val_mse: 0.0122 - val_mae: 0.0874 - lr: 1.0000e-05 - 289ms/epoch - 6ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0621 - val_loss: 0.0123 - val_mse: 0.0123 - val_mae: 0.0880 - lr: 1.0000e-05 - 285ms/epoch - 6ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0592 - val_loss: 0.0124 - val_mse: 0.0124 - val_mae: 0.0884 - lr: 1.0000e-05 - 301ms/epoch - 6ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0595 - val_loss: 0.0125 - val_mse: 0.0125 - val_mae: 0.0886 - lr: 1.0000e-05 - 285ms/epoch - 6ms/step
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0591 - val_loss: 0.0126 - val_mse: 0.0126 - val_mae: 0.0891 - lr: 1.0000e-05 - 297ms/epoch - 6ms/step
Epoch 61/500

Epoch 00061: val_loss did not improve from 0.00772
48/48 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0575 - val_loss: 0.0127 - val_mse: 0.0127 - val_mae: 0.0893 - lr: 1.0000e-05 - 295ms/epoch - 6ms/step
Epoch 00061: early stopping
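The learning-rate trace in the log above (1.0e-03 → 1.0e-04 → 1.0e-05, then early stopping) is consistent with a ReduceLROnPlateau-style schedule. A minimal sketch of that logic in plain Python — the `factor`, `patience`, and `min_lr` values are assumptions inferred from the log, not taken from the notebook's code:

```python
def plateau_schedule(val_losses, lr=1e-3, factor=0.1, patience=4, min_lr=1e-5):
    """Reduce lr by `factor` whenever val_loss fails to improve for
    `patience` consecutive epochs; never drop below `min_lr`."""
    best = float("inf")
    wait = 0
    lrs = []
    for loss in val_losses:
        if loss < best:
            best, wait = loss, 0
        else:
            wait += 1
            if wait >= patience:
                lr = max(lr * factor, min_lr)
                wait = 0
        lrs.append(lr)
    return lrs
```

In the notebook this behaviour comes from Keras callbacks rather than hand-rolled code; the sketch only illustrates why the logged lr plateaus at 1.0e-05.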
SMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 52.20083682521797 
RMSE:	 7.225014659169763 
MAPE:	 5.885308101636885
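The MSE, RMSE, and MAPE figures above follow the standard definitions; a minimal sketch (the array names are hypothetical, not the notebook's):

```python
import math

def regression_metrics(actual, predicted):
    """Return (MSE, RMSE, MAPE%) for two aligned numeric sequences."""
    errs = [a - p for a, p in zip(actual, predicted)]
    mse = sum(e ** 2 for e in errs) / len(errs)            # mean squared error
    rmse = math.sqrt(mse)                                  # root of the MSE
    mape = 100.0 * sum(abs(e / a) for e, a in zip(errs, actual)) / len(errs)
    return mse, rmse, mape
```

As a sanity check on the printout above, RMSE is by definition the square root of MSE, and indeed 7.225014659169763² ≈ 52.20083682521797.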
EMA
EMA([input_arrays], [timeperiod=30])

Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
51
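TA-Lib's EMA (documented above) uses the standard smoothing factor α = 2 / (timeperiod + 1). A pure-Python sketch of the same recursion — seeded with the first price for simplicity, whereas TA-Lib seeds with an SMA of the first `timeperiod` values, so the two differ over the warm-up window:

```python
def ema(prices, timeperiod=30):
    """Exponential moving average with alpha = 2 / (timeperiod + 1)."""
    alpha = 2.0 / (timeperiod + 1)
    out = [prices[0]]  # simplistic seed; TA-Lib seeds with an SMA instead
    for p in prices[1:]:
        out.append(alpha * p + (1 - alpha) * out[-1])
    return out
```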

Working on EMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17007.807, Time=3.35 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14576.593, Time=5.19 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-15585.734, Time=9.76 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14574.593, Time=7.84 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-15458.426, Time=11.72 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15621.247, Time=13.61 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-17231.605, Time=22.31 sec
 ARIMA(0,3,3)(0,0,0)[0]             : AIC=-14570.593, Time=10.58 sec
 ARIMA(1,3,3)(0,0,0)[0]             : AIC=-16761.093, Time=18.01 sec
 ARIMA(0,3,2)(0,0,0)[0] intercept   : AIC=-13173.936, Time=34.73 sec

Best model:  ARIMA(0,3,2)(0,0,0)[0]          
Total fit time: 137.133 seconds
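The stepwise search above (produced by pmdarima's `auto_arima`) fits candidate (p, d, q) orders and keeps the one with the lowest AIC; here ARIMA(0,3,2) wins at −17231.605. The selection rule itself is just an argmin over AIC scores, illustrated below with the non-intercept candidates copied from the log:

```python
# AIC per candidate order, copied from the stepwise search log (lower is better)
aic = {
    (1, 3, 1): -17007.807,
    (0, 3, 0): -14576.593,
    (1, 3, 0): -15585.734,
    (0, 3, 1): -14574.593,
    (2, 3, 1): -15458.426,
    (1, 3, 2): -15621.247,
    (0, 3, 2): -17231.605,
    (0, 3, 3): -14570.593,
    (1, 3, 3): -16761.093,
}
best_order = min(aic, key=aic.get)  # order with the minimal AIC
```

The real search is more involved (it proposes neighbours of the current best order rather than scoring a fixed grid), but the acceptance criterion is exactly this comparison.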
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(0, 3, 2)   Log Likelihood                8638.803
Date:                Sun, 12 Dec 2021   AIC                         -17231.605
Time:                        22:44:44   BIC                         -17123.716
Sample:                             0   HQIC                        -17190.171
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -5.101e-09   4.36e-05     -0.000      1.000   -8.54e-05    8.54e-05
x2         -5.085e-09   4.35e-05     -0.000      1.000   -8.53e-05    8.53e-05
x3          -5.12e-09   4.36e-05     -0.000      1.000   -8.56e-05    8.55e-05
x4             1.0000   4.36e-05   2.29e+04      0.000       1.000       1.000
x5         -4.635e-09   4.15e-05     -0.000      1.000   -8.14e-05    8.14e-05
x6         -1.766e-08   7.54e-05     -0.000      1.000      -0.000       0.000
x7         -5.054e-09   4.34e-05     -0.000      1.000    -8.5e-05     8.5e-05
x8         -4.941e-09   4.29e-05     -0.000      1.000   -8.41e-05    8.41e-05
x9         -3.138e-10   8.71e-06   -3.6e-05      1.000   -1.71e-05    1.71e-05
x10        -1.002e-09   1.85e-05  -5.41e-05      1.000   -3.63e-05    3.63e-05
x11        -4.879e-09   4.26e-05     -0.000      1.000   -8.36e-05    8.36e-05
x12        -4.991e-09   4.31e-05     -0.000      1.000   -8.46e-05    8.45e-05
x13        -5.099e-09   4.36e-05     -0.000      1.000   -8.54e-05    8.54e-05
x14        -3.925e-08      0.000     -0.000      1.000      -0.000       0.000
x15        -4.597e-09   4.13e-05     -0.000      1.000    -8.1e-05     8.1e-05
x16        -1.164e-08    6.6e-05     -0.000      1.000      -0.000       0.000
x17        -4.702e-09   4.19e-05     -0.000      1.000   -8.22e-05    8.22e-05
x18        -8.297e-10   1.65e-05  -5.02e-05      1.000   -3.24e-05    3.24e-05
x19        -5.725e-09   4.61e-05     -0.000      1.000   -9.04e-05    9.04e-05
x20        -5.511e-09   4.28e-05     -0.000      1.000    -8.4e-05    8.39e-05
ma.L1         -1.3891   1.96e-08  -7.08e+07      0.000      -1.389      -1.389
ma.L2          0.4027   2.02e-08   1.99e+07      0.000       0.403       0.403
sigma2      7.547e-11   6.92e-11      1.091      0.275   -6.01e-11    2.11e-10
===================================================================================
Ljung-Box (L1) (Q):                  67.97   Jarque-Bera (JB):           6306943.47
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                            12.31
Prob(H) (two-sided):                  0.00   Kurtosis:                       435.93
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 3.3e+24. Standard errors may be unstable.
ARIMA order: (0, 3, 2) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.05344, saving model to LSTM7.h5
16/16 - 3s - loss: 0.5464 - mse: 0.5464 - mae: 0.5958 - val_loss: 0.0534 - val_mse: 0.0534 - val_mae: 0.1986 - lr: 0.0010 - 3s/epoch - 182ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.05344
16/16 - 0s - loss: 0.0748 - mse: 0.0748 - mae: 0.2333 - val_loss: 0.0789 - val_mse: 0.0789 - val_mae: 0.2555 - lr: 0.0010 - 117ms/epoch - 7ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.05344 to 0.05059, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0579 - mse: 0.0579 - mae: 0.2016 - val_loss: 0.0506 - val_mse: 0.0506 - val_mae: 0.1944 - lr: 0.0010 - 147ms/epoch - 9ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.05059 to 0.03881, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0306 - mse: 0.0306 - mae: 0.1403 - val_loss: 0.0388 - val_mse: 0.0388 - val_mae: 0.1665 - lr: 0.0010 - 137ms/epoch - 9ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.03881
16/16 - 0s - loss: 0.0253 - mse: 0.0253 - mae: 0.1265 - val_loss: 0.0401 - val_mse: 0.0401 - val_mae: 0.1697 - lr: 0.0010 - 109ms/epoch - 7ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.03881 to 0.03795, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0212 - mse: 0.0212 - mae: 0.1153 - val_loss: 0.0379 - val_mse: 0.0379 - val_mae: 0.1647 - lr: 0.0010 - 146ms/epoch - 9ms/step
Epoch 7/500

Epoch 00007: val_loss improved from 0.03795 to 0.03671, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0188 - mse: 0.0188 - mae: 0.1091 - val_loss: 0.0367 - val_mse: 0.0367 - val_mae: 0.1618 - lr: 0.0010 - 134ms/epoch - 8ms/step
Epoch 8/500

Epoch 00008: val_loss improved from 0.03671 to 0.03365, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0167 - mse: 0.0167 - mae: 0.1016 - val_loss: 0.0337 - val_mse: 0.0337 - val_mae: 0.1544 - lr: 0.0010 - 194ms/epoch - 12ms/step
Epoch 9/500

Epoch 00009: val_loss improved from 0.03365 to 0.03303, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0163 - mse: 0.0163 - mae: 0.1005 - val_loss: 0.0330 - val_mse: 0.0330 - val_mae: 0.1530 - lr: 0.0010 - 137ms/epoch - 9ms/step
Epoch 10/500

Epoch 00010: val_loss improved from 0.03303 to 0.03180, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0155 - mse: 0.0155 - mae: 0.0971 - val_loss: 0.0318 - val_mse: 0.0318 - val_mae: 0.1502 - lr: 0.0010 - 139ms/epoch - 9ms/step
Epoch 11/500

Epoch 00011: val_loss improved from 0.03180 to 0.03037, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0120 - mse: 0.0120 - mae: 0.0870 - val_loss: 0.0304 - val_mse: 0.0304 - val_mae: 0.1470 - lr: 0.0010 - 135ms/epoch - 8ms/step
Epoch 12/500

Epoch 00012: val_loss improved from 0.03037 to 0.02927, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0131 - mse: 0.0131 - mae: 0.0901 - val_loss: 0.0293 - val_mse: 0.0293 - val_mae: 0.1445 - lr: 0.0010 - 135ms/epoch - 8ms/step
Epoch 13/500

Epoch 00013: val_loss improved from 0.02927 to 0.02823, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0128 - mse: 0.0128 - mae: 0.0898 - val_loss: 0.0282 - val_mse: 0.0282 - val_mae: 0.1421 - lr: 0.0010 - 146ms/epoch - 9ms/step
Epoch 14/500

Epoch 00014: val_loss improved from 0.02823 to 0.02758, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0116 - mse: 0.0116 - mae: 0.0843 - val_loss: 0.0276 - val_mse: 0.0276 - val_mae: 0.1406 - lr: 0.0010 - 143ms/epoch - 9ms/step
Epoch 15/500

Epoch 00015: val_loss improved from 0.02758 to 0.02695, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0102 - mse: 0.0102 - mae: 0.0791 - val_loss: 0.0269 - val_mse: 0.0269 - val_mae: 0.1392 - lr: 0.0010 - 156ms/epoch - 10ms/step
Epoch 16/500

Epoch 00016: val_loss improved from 0.02695 to 0.02675, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0104 - mse: 0.0104 - mae: 0.0793 - val_loss: 0.0267 - val_mse: 0.0267 - val_mae: 0.1389 - lr: 0.0010 - 139ms/epoch - 9ms/step
Epoch 17/500

Epoch 00017: val_loss improved from 0.02675 to 0.02617, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0106 - mse: 0.0106 - mae: 0.0794 - val_loss: 0.0262 - val_mse: 0.0262 - val_mae: 0.1374 - lr: 0.0010 - 167ms/epoch - 10ms/step
Epoch 18/500

Epoch 00018: val_loss improved from 0.02617 to 0.02421, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0091 - mse: 0.0091 - mae: 0.0749 - val_loss: 0.0242 - val_mse: 0.0242 - val_mae: 0.1323 - lr: 0.0010 - 138ms/epoch - 9ms/step
Epoch 19/500

Epoch 00019: val_loss improved from 0.02421 to 0.02320, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0097 - mse: 0.0097 - mae: 0.0777 - val_loss: 0.0232 - val_mse: 0.0232 - val_mae: 0.1297 - lr: 0.0010 - 144ms/epoch - 9ms/step
Epoch 20/500

Epoch 00020: val_loss improved from 0.02320 to 0.02097, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0089 - mse: 0.0089 - mae: 0.0730 - val_loss: 0.0210 - val_mse: 0.0210 - val_mae: 0.1237 - lr: 0.0010 - 136ms/epoch - 9ms/step
Epoch 21/500

Epoch 00021: val_loss improved from 0.02097 to 0.01974, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0102 - mse: 0.0102 - mae: 0.0796 - val_loss: 0.0197 - val_mse: 0.0197 - val_mae: 0.1204 - lr: 0.0010 - 147ms/epoch - 9ms/step
Epoch 22/500

Epoch 00022: val_loss improved from 0.01974 to 0.01844, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0087 - mse: 0.0087 - mae: 0.0736 - val_loss: 0.0184 - val_mse: 0.0184 - val_mae: 0.1168 - lr: 0.0010 - 142ms/epoch - 9ms/step
Epoch 23/500

Epoch 00023: val_loss improved from 0.01844 to 0.01779, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0081 - mse: 0.0081 - mae: 0.0723 - val_loss: 0.0178 - val_mse: 0.0178 - val_mae: 0.1150 - lr: 0.0010 - 139ms/epoch - 9ms/step
Epoch 24/500

Epoch 00024: val_loss improved from 0.01779 to 0.01697, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0073 - mse: 0.0073 - mae: 0.0676 - val_loss: 0.0170 - val_mse: 0.0170 - val_mae: 0.1126 - lr: 0.0010 - 134ms/epoch - 8ms/step
Epoch 25/500

Epoch 00025: val_loss improved from 0.01697 to 0.01621, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0081 - mse: 0.0081 - mae: 0.0728 - val_loss: 0.0162 - val_mse: 0.0162 - val_mae: 0.1103 - lr: 0.0010 - 174ms/epoch - 11ms/step
Epoch 26/500

Epoch 00026: val_loss improved from 0.01621 to 0.01597, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0080 - mse: 0.0080 - mae: 0.0705 - val_loss: 0.0160 - val_mse: 0.0160 - val_mae: 0.1095 - lr: 0.0010 - 153ms/epoch - 10ms/step
Epoch 27/500

Epoch 00027: val_loss improved from 0.01597 to 0.01543, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0076 - mse: 0.0076 - mae: 0.0683 - val_loss: 0.0154 - val_mse: 0.0154 - val_mae: 0.1074 - lr: 0.0010 - 141ms/epoch - 9ms/step
Epoch 28/500

Epoch 00028: val_loss improved from 0.01543 to 0.01516, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0075 - mse: 0.0075 - mae: 0.0684 - val_loss: 0.0152 - val_mse: 0.0152 - val_mae: 0.1065 - lr: 0.0010 - 152ms/epoch - 10ms/step
Epoch 29/500

Epoch 00029: val_loss improved from 0.01516 to 0.01501, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0071 - mse: 0.0071 - mae: 0.0658 - val_loss: 0.0150 - val_mse: 0.0150 - val_mae: 0.1054 - lr: 0.0010 - 140ms/epoch - 9ms/step
Epoch 30/500

Epoch 00030: val_loss improved from 0.01501 to 0.01470, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0073 - mse: 0.0073 - mae: 0.0671 - val_loss: 0.0147 - val_mse: 0.0147 - val_mae: 0.1044 - lr: 0.0010 - 155ms/epoch - 10ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0071 - mse: 0.0071 - mae: 0.0661 - val_loss: 0.0150 - val_mse: 0.0150 - val_mae: 0.1036 - lr: 0.0010 - 118ms/epoch - 7ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0652 - val_loss: 0.0150 - val_mse: 0.0150 - val_mae: 0.1029 - lr: 0.0010 - 113ms/epoch - 7ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0641 - val_loss: 0.0155 - val_mse: 0.0155 - val_mae: 0.1028 - lr: 0.0010 - 130ms/epoch - 8ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0665 - val_loss: 0.0153 - val_mse: 0.0153 - val_mae: 0.1019 - lr: 0.0010 - 127ms/epoch - 8ms/step
Epoch 35/500

Epoch 00035: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00035: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0632 - val_loss: 0.0152 - val_mse: 0.0152 - val_mae: 0.1015 - lr: 0.0010 - 124ms/epoch - 8ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0598 - val_loss: 0.0151 - val_mse: 0.0151 - val_mae: 0.1014 - lr: 1.0000e-04 - 119ms/epoch - 7ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0576 - val_loss: 0.0151 - val_mse: 0.0151 - val_mae: 0.1013 - lr: 1.0000e-04 - 114ms/epoch - 7ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0601 - val_loss: 0.0151 - val_mse: 0.0151 - val_mae: 0.1012 - lr: 1.0000e-04 - 110ms/epoch - 7ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0618 - val_loss: 0.0151 - val_mse: 0.0151 - val_mae: 0.1011 - lr: 1.0000e-04 - 117ms/epoch - 7ms/step
Epoch 40/500

Epoch 00040: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00040: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0611 - val_loss: 0.0151 - val_mse: 0.0151 - val_mae: 0.1011 - lr: 1.0000e-04 - 110ms/epoch - 7ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0594 - val_loss: 0.0151 - val_mse: 0.0151 - val_mae: 0.1011 - lr: 1.0000e-05 - 124ms/epoch - 8ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0600 - val_loss: 0.0151 - val_mse: 0.0151 - val_mae: 0.1011 - lr: 1.0000e-05 - 119ms/epoch - 7ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0596 - val_loss: 0.0151 - val_mse: 0.0151 - val_mae: 0.1011 - lr: 1.0000e-05 - 127ms/epoch - 8ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0601 - val_loss: 0.0151 - val_mse: 0.0151 - val_mae: 0.1011 - lr: 1.0000e-05 - 119ms/epoch - 7ms/step
Epoch 45/500

Epoch 00045: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00045: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0564 - val_loss: 0.0151 - val_mse: 0.0151 - val_mae: 0.1011 - lr: 1.0000e-05 - 123ms/epoch - 8ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0610 - val_loss: 0.0151 - val_mse: 0.0151 - val_mae: 0.1011 - lr: 1.0000e-05 - 164ms/epoch - 10ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0574 - val_loss: 0.0151 - val_mse: 0.0151 - val_mae: 0.1011 - lr: 1.0000e-05 - 117ms/epoch - 7ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0607 - val_loss: 0.0152 - val_mse: 0.0152 - val_mae: 0.1011 - lr: 1.0000e-05 - 132ms/epoch - 8ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0592 - val_loss: 0.0152 - val_mse: 0.0152 - val_mae: 0.1011 - lr: 1.0000e-05 - 124ms/epoch - 8ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0609 - val_loss: 0.0152 - val_mse: 0.0152 - val_mae: 0.1011 - lr: 1.0000e-05 - 133ms/epoch - 8ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0579 - val_loss: 0.0152 - val_mse: 0.0152 - val_mae: 0.1011 - lr: 1.0000e-05 - 125ms/epoch - 8ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0607 - val_loss: 0.0152 - val_mse: 0.0152 - val_mae: 0.1011 - lr: 1.0000e-05 - 111ms/epoch - 7ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0598 - val_loss: 0.0152 - val_mse: 0.0152 - val_mae: 0.1011 - lr: 1.0000e-05 - 117ms/epoch - 7ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0593 - val_loss: 0.0152 - val_mse: 0.0152 - val_mae: 0.1011 - lr: 1.0000e-05 - 117ms/epoch - 7ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0612 - val_loss: 0.0152 - val_mse: 0.0152 - val_mae: 0.1011 - lr: 1.0000e-05 - 116ms/epoch - 7ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0572 - val_loss: 0.0152 - val_mse: 0.0152 - val_mae: 0.1010 - lr: 1.0000e-05 - 110ms/epoch - 7ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0589 - val_loss: 0.0152 - val_mse: 0.0152 - val_mae: 0.1010 - lr: 1.0000e-05 - 147ms/epoch - 9ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0604 - val_loss: 0.0152 - val_mse: 0.0152 - val_mae: 0.1010 - lr: 1.0000e-05 - 136ms/epoch - 8ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0618 - val_loss: 0.0152 - val_mse: 0.0152 - val_mae: 0.1010 - lr: 1.0000e-05 - 120ms/epoch - 7ms/step
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0590 - val_loss: 0.0152 - val_mse: 0.0152 - val_mae: 0.1010 - lr: 1.0000e-05 - 113ms/epoch - 7ms/step
Epoch 61/500

Epoch 00061: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0564 - val_loss: 0.0152 - val_mse: 0.0152 - val_mae: 0.1010 - lr: 1.0000e-05 - 113ms/epoch - 7ms/step
Epoch 62/500

Epoch 00062: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0631 - val_loss: 0.0152 - val_mse: 0.0152 - val_mae: 0.1010 - lr: 1.0000e-05 - 111ms/epoch - 7ms/step
Epoch 63/500

Epoch 00063: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0588 - val_loss: 0.0152 - val_mse: 0.0152 - val_mae: 0.1010 - lr: 1.0000e-05 - 121ms/epoch - 8ms/step
Epoch 64/500

Epoch 00064: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0628 - val_loss: 0.0152 - val_mse: 0.0152 - val_mae: 0.1010 - lr: 1.0000e-05 - 135ms/epoch - 8ms/step
Epoch 65/500

Epoch 00065: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0585 - val_loss: 0.0152 - val_mse: 0.0152 - val_mae: 0.1010 - lr: 1.0000e-05 - 125ms/epoch - 8ms/step
Epoch 66/500

Epoch 00066: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0594 - val_loss: 0.0153 - val_mse: 0.0153 - val_mae: 0.1010 - lr: 1.0000e-05 - 118ms/epoch - 7ms/step
Epoch 67/500

Epoch 00067: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0632 - val_loss: 0.0153 - val_mse: 0.0153 - val_mae: 0.1010 - lr: 1.0000e-05 - 115ms/epoch - 7ms/step
Epoch 68/500

Epoch 00068: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0599 - val_loss: 0.0153 - val_mse: 0.0153 - val_mae: 0.1010 - lr: 1.0000e-05 - 119ms/epoch - 7ms/step
Epoch 69/500

Epoch 00069: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0632 - val_loss: 0.0153 - val_mse: 0.0153 - val_mae: 0.1011 - lr: 1.0000e-05 - 114ms/epoch - 7ms/step
Epoch 70/500

Epoch 00070: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0591 - val_loss: 0.0153 - val_mse: 0.0153 - val_mae: 0.1011 - lr: 1.0000e-05 - 114ms/epoch - 7ms/step
Epoch 71/500

Epoch 00071: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0595 - val_loss: 0.0153 - val_mse: 0.0153 - val_mae: 0.1011 - lr: 1.0000e-05 - 125ms/epoch - 8ms/step
Epoch 72/500

Epoch 00072: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0592 - val_loss: 0.0153 - val_mse: 0.0153 - val_mae: 0.1011 - lr: 1.0000e-05 - 125ms/epoch - 8ms/step
Epoch 73/500

Epoch 00073: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0582 - val_loss: 0.0153 - val_mse: 0.0153 - val_mae: 0.1011 - lr: 1.0000e-05 - 129ms/epoch - 8ms/step
Epoch 74/500

Epoch 00074: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0596 - val_loss: 0.0154 - val_mse: 0.0154 - val_mae: 0.1011 - lr: 1.0000e-05 - 176ms/epoch - 11ms/step
Epoch 75/500

Epoch 00075: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0622 - val_loss: 0.0154 - val_mse: 0.0154 - val_mae: 0.1011 - lr: 1.0000e-05 - 121ms/epoch - 8ms/step
Epoch 76/500

Epoch 00076: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0596 - val_loss: 0.0154 - val_mse: 0.0154 - val_mae: 0.1011 - lr: 1.0000e-05 - 116ms/epoch - 7ms/step
Epoch 77/500

Epoch 00077: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0606 - val_loss: 0.0154 - val_mse: 0.0154 - val_mae: 0.1011 - lr: 1.0000e-05 - 114ms/epoch - 7ms/step
Epoch 78/500

Epoch 00078: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0606 - val_loss: 0.0154 - val_mse: 0.0154 - val_mae: 0.1011 - lr: 1.0000e-05 - 116ms/epoch - 7ms/step
Epoch 79/500

Epoch 00079: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0617 - val_loss: 0.0154 - val_mse: 0.0154 - val_mae: 0.1010 - lr: 1.0000e-05 - 116ms/epoch - 7ms/step
Epoch 80/500

Epoch 00080: val_loss did not improve from 0.01470
16/16 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0599 - val_loss: 0.0154 - val_mse: 0.0154 - val_mae: 0.1011 - lr: 1.0000e-05 - 130ms/epoch - 8ms/step
Epoch 00080: early stopping
SMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 52.20083682521797 
RMSE:	 7.225014659169763 
MAPE:	 5.885308101636885

EMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	47.39% Accuracy
MSE:	 63.36969283161608 
RMSE:	 7.960508327463522 
MAPE:	 6.712143148682283
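The "Prediction vs Close" and "Prediction vs Prediction" accuracies above presumably measure how often the predicted direction of movement agrees with the reference series; the notebook's exact definition is not shown in this excerpt, so the following is a hypothetical sketch of one such directional-accuracy measure:

```python
def directional_accuracy(reference, predicted):
    """Percentage of steps where `predicted` and `reference` move in
    the same direction (both up or both down). Hypothetical metric."""
    hits = sum(
        (predicted[i] - predicted[i - 1]) * (reference[i] - reference[i - 1]) > 0
        for i in range(1, len(reference))
    )
    return 100.0 * hits / (len(reference) - 1)
```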
WMA
WMA([input_arrays], [timeperiod=30])

Weighted Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
49
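The Weighted Moving Average (documented above) weights the most recent price highest, with linearly decreasing weights timeperiod…1 over the window; a minimal pure-Python sketch:

```python
def wma(prices, timeperiod=30):
    """Linearly weighted MA: the newest price in each window gets weight
    `timeperiod`, the oldest gets weight 1."""
    weights = list(range(1, timeperiod + 1))
    denom = sum(weights)  # = timeperiod * (timeperiod + 1) / 2
    out = []
    for i in range(timeperiod - 1, len(prices)):
        window = prices[i - timeperiod + 1 : i + 1]
        out.append(sum(w * p for w, p in zip(weights, window)) / denom)
    return out
```

Unlike the simplistic EMA seed earlier, this matches TA-Lib's convention of emitting no value until a full window is available (hence the shorter output).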

Working on WMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-15462.744, Time=15.26 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-13144.103, Time=2.93 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16179.868, Time=7.16 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14670.350, Time=14.58 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-15643.233, Time=21.91 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-15673.437, Time=19.12 sec
 ARIMA(1,3,0)(0,0,0)[0] intercept   : AIC=-15494.535, Time=8.38 sec

Best model:  ARIMA(1,3,0)(0,0,0)[0]          
Total fit time: 89.376 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(1, 3, 0)   Log Likelihood                8111.934
Date:                Sun, 12 Dec 2021   AIC                         -16179.868
Time:                        22:52:00   BIC                         -16076.670
Sample:                             0   HQIC                        -16140.236
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -1.474e-05      0.000     -0.048      0.961      -0.001       0.001
x2         -1.471e-05      0.000     -0.041      0.967      -0.001       0.001
x3         -1.475e-05      0.000     -0.072      0.943      -0.000       0.000
x4             1.0000      0.000   3644.383      0.000       0.999       1.001
x5         -1.405e-05      0.000     -0.051      0.960      -0.001       0.001
x6         -2.487e-05   4.39e-05     -0.567      0.571      -0.000    6.11e-05
x7         -1.467e-05      0.000     -0.134      0.893      -0.000       0.000
x8             0.0004      0.000      3.240      0.001       0.000       0.001
x9          3.739e-06      0.001      0.003      0.998      -0.003       0.003
x10           -0.0006      0.001     -0.447      0.655      -0.003       0.002
x11            0.0024   2.31e-05    105.301      0.000       0.002       0.002
x12           -0.0019      0.000     -7.274      0.000      -0.002      -0.001
x13        -1.473e-05      0.000     -0.113      0.910      -0.000       0.000
x14        -4.124e-05      0.000     -0.135      0.893      -0.001       0.001
x15        -1.347e-05      0.000     -0.095      0.924      -0.000       0.000
x16        -2.422e-05      0.000     -0.100      0.920      -0.000       0.000
x17        -1.471e-05      0.000     -0.112      0.911      -0.000       0.000
x18         2.884e-06      0.000      0.006      0.995      -0.001       0.001
x19        -1.493e-05      0.000     -0.105      0.916      -0.000       0.000
x20         3.469e-06      0.000      0.007      0.994      -0.001       0.001
ar.L1         -0.6665   6.84e-05  -9743.045      0.000      -0.667      -0.666
sigma2      1.498e-10   7.34e-11      2.042      0.041    6.03e-12    2.94e-10
===================================================================================
Ljung-Box (L1) (Q):                  89.34   Jarque-Bera (JB):           3270298.31
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             5.18
Prob(H) (two-sided):                  0.00   Kurtosis:                       315.08
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 3.61e+19. Standard errors may be unstable.
ARIMA order: (1, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.03969, saving model to LSTM7.h5
17/17 - 3s - loss: 0.1735 - mse: 0.1735 - mae: 0.3175 - val_loss: 0.0397 - val_mse: 0.0397 - val_mae: 0.1558 - lr: 0.0010 - 3s/epoch - 197ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.03969
17/17 - 0s - loss: 0.0597 - mse: 0.0597 - mae: 0.2017 - val_loss: 0.0522 - val_mse: 0.0522 - val_mae: 0.1822 - lr: 0.0010 - 111ms/epoch - 7ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.03969 to 0.02995, saving model to LSTM7.h5
17/17 - 0s - loss: 0.0342 - mse: 0.0342 - mae: 0.1446 - val_loss: 0.0299 - val_mse: 0.0299 - val_mae: 0.1309 - lr: 0.0010 - 131ms/epoch - 8ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.02995 to 0.02040, saving model to LSTM7.h5
17/17 - 0s - loss: 0.0232 - mse: 0.0232 - mae: 0.1194 - val_loss: 0.0204 - val_mse: 0.0204 - val_mae: 0.1114 - lr: 0.0010 - 142ms/epoch - 8ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0171 - mse: 0.0171 - mae: 0.1033 - val_loss: 0.0208 - val_mse: 0.0208 - val_mae: 0.1129 - lr: 0.0010 - 128ms/epoch - 8ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0149 - mse: 0.0149 - mae: 0.0963 - val_loss: 0.0221 - val_mse: 0.0221 - val_mae: 0.1147 - lr: 0.0010 - 131ms/epoch - 8ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0143 - mse: 0.0143 - mae: 0.0938 - val_loss: 0.0229 - val_mse: 0.0229 - val_mae: 0.1161 - lr: 0.0010 - 119ms/epoch - 7ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0125 - mse: 0.0125 - mae: 0.0894 - val_loss: 0.0281 - val_mse: 0.0281 - val_mae: 0.1238 - lr: 0.0010 - 126ms/epoch - 7ms/step
Epoch 9/500

Epoch 00009: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00009: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0122 - mse: 0.0122 - mae: 0.0874 - val_loss: 0.0294 - val_mse: 0.0294 - val_mae: 0.1263 - lr: 0.0010 - 127ms/epoch - 7ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0095 - mse: 0.0095 - mae: 0.0754 - val_loss: 0.0296 - val_mse: 0.0296 - val_mae: 0.1266 - lr: 1.0000e-04 - 121ms/epoch - 7ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0101 - mse: 0.0101 - mae: 0.0783 - val_loss: 0.0294 - val_mse: 0.0294 - val_mae: 0.1263 - lr: 1.0000e-04 - 121ms/epoch - 7ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0100 - mse: 0.0100 - mae: 0.0785 - val_loss: 0.0297 - val_mse: 0.0297 - val_mae: 0.1267 - lr: 1.0000e-04 - 114ms/epoch - 7ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0101 - mse: 0.0101 - mae: 0.0787 - val_loss: 0.0304 - val_mse: 0.0304 - val_mae: 0.1280 - lr: 1.0000e-04 - 128ms/epoch - 8ms/step
Epoch 14/500

Epoch 00014: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00014: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0098 - mse: 0.0098 - mae: 0.0774 - val_loss: 0.0310 - val_mse: 0.0310 - val_mae: 0.1292 - lr: 1.0000e-04 - 122ms/epoch - 7ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0095 - mse: 0.0095 - mae: 0.0766 - val_loss: 0.0311 - val_mse: 0.0311 - val_mae: 0.1294 - lr: 1.0000e-05 - 121ms/epoch - 7ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0102 - mse: 0.0102 - mae: 0.0796 - val_loss: 0.0311 - val_mse: 0.0311 - val_mae: 0.1295 - lr: 1.0000e-05 - 133ms/epoch - 8ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0100 - mse: 0.0100 - mae: 0.0780 - val_loss: 0.0311 - val_mse: 0.0311 - val_mae: 0.1294 - lr: 1.0000e-05 - 123ms/epoch - 7ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0107 - mse: 0.0107 - mae: 0.0819 - val_loss: 0.0311 - val_mse: 0.0311 - val_mae: 0.1295 - lr: 1.0000e-05 - 123ms/epoch - 7ms/step
Epoch 19/500

Epoch 00019: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00019: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0100 - mse: 0.0100 - mae: 0.0785 - val_loss: 0.0312 - val_mse: 0.0312 - val_mae: 0.1295 - lr: 1.0000e-05 - 119ms/epoch - 7ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0099 - mse: 0.0099 - mae: 0.0791 - val_loss: 0.0312 - val_mse: 0.0312 - val_mae: 0.1295 - lr: 1.0000e-05 - 129ms/epoch - 8ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0098 - mse: 0.0098 - mae: 0.0779 - val_loss: 0.0312 - val_mse: 0.0312 - val_mae: 0.1295 - lr: 1.0000e-05 - 133ms/epoch - 8ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0099 - mse: 0.0099 - mae: 0.0779 - val_loss: 0.0312 - val_mse: 0.0312 - val_mae: 0.1295 - lr: 1.0000e-05 - 119ms/epoch - 7ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0097 - mse: 0.0097 - mae: 0.0760 - val_loss: 0.0311 - val_mse: 0.0311 - val_mae: 0.1294 - lr: 1.0000e-05 - 114ms/epoch - 7ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0092 - mse: 0.0092 - mae: 0.0754 - val_loss: 0.0310 - val_mse: 0.0310 - val_mae: 0.1292 - lr: 1.0000e-05 - 129ms/epoch - 8ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0107 - mse: 0.0107 - mae: 0.0810 - val_loss: 0.0309 - val_mse: 0.0309 - val_mae: 0.1291 - lr: 1.0000e-05 - 165ms/epoch - 10ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0103 - mse: 0.0103 - mae: 0.0789 - val_loss: 0.0309 - val_mse: 0.0309 - val_mae: 0.1290 - lr: 1.0000e-05 - 130ms/epoch - 8ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0097 - mse: 0.0097 - mae: 0.0778 - val_loss: 0.0311 - val_mse: 0.0311 - val_mae: 0.1293 - lr: 1.0000e-05 - 131ms/epoch - 8ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0104 - mse: 0.0104 - mae: 0.0802 - val_loss: 0.0311 - val_mse: 0.0311 - val_mae: 0.1294 - lr: 1.0000e-05 - 134ms/epoch - 8ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0101 - mse: 0.0101 - mae: 0.0793 - val_loss: 0.0311 - val_mse: 0.0311 - val_mae: 0.1295 - lr: 1.0000e-05 - 128ms/epoch - 8ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0104 - mse: 0.0104 - mae: 0.0797 - val_loss: 0.0312 - val_mse: 0.0312 - val_mae: 0.1295 - lr: 1.0000e-05 - 116ms/epoch - 7ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0106 - mse: 0.0106 - mae: 0.0805 - val_loss: 0.0312 - val_mse: 0.0312 - val_mae: 0.1295 - lr: 1.0000e-05 - 135ms/epoch - 8ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0087 - mse: 0.0087 - mae: 0.0742 - val_loss: 0.0312 - val_mse: 0.0312 - val_mae: 0.1296 - lr: 1.0000e-05 - 114ms/epoch - 7ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0105 - mse: 0.0105 - mae: 0.0797 - val_loss: 0.0312 - val_mse: 0.0312 - val_mae: 0.1296 - lr: 1.0000e-05 - 127ms/epoch - 7ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0095 - mse: 0.0095 - mae: 0.0768 - val_loss: 0.0313 - val_mse: 0.0313 - val_mae: 0.1298 - lr: 1.0000e-05 - 127ms/epoch - 7ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0100 - mse: 0.0100 - mae: 0.0784 - val_loss: 0.0314 - val_mse: 0.0314 - val_mae: 0.1299 - lr: 1.0000e-05 - 119ms/epoch - 7ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0100 - mse: 0.0100 - mae: 0.0796 - val_loss: 0.0314 - val_mse: 0.0314 - val_mae: 0.1299 - lr: 1.0000e-05 - 117ms/epoch - 7ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0093 - mse: 0.0093 - mae: 0.0751 - val_loss: 0.0313 - val_mse: 0.0313 - val_mae: 0.1298 - lr: 1.0000e-05 - 125ms/epoch - 7ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0105 - mse: 0.0105 - mae: 0.0817 - val_loss: 0.0314 - val_mse: 0.0314 - val_mae: 0.1298 - lr: 1.0000e-05 - 115ms/epoch - 7ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0105 - mse: 0.0105 - mae: 0.0817 - val_loss: 0.0314 - val_mse: 0.0314 - val_mae: 0.1300 - lr: 1.0000e-05 - 132ms/epoch - 8ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0092 - mse: 0.0092 - mae: 0.0747 - val_loss: 0.0315 - val_mse: 0.0315 - val_mae: 0.1302 - lr: 1.0000e-05 - 116ms/epoch - 7ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0098 - mse: 0.0098 - mae: 0.0785 - val_loss: 0.0315 - val_mse: 0.0315 - val_mae: 0.1302 - lr: 1.0000e-05 - 118ms/epoch - 7ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0090 - mse: 0.0090 - mae: 0.0752 - val_loss: 0.0317 - val_mse: 0.0317 - val_mae: 0.1305 - lr: 1.0000e-05 - 132ms/epoch - 8ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0093 - mse: 0.0093 - mae: 0.0759 - val_loss: 0.0318 - val_mse: 0.0318 - val_mae: 0.1307 - lr: 1.0000e-05 - 129ms/epoch - 8ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0099 - mse: 0.0099 - mae: 0.0789 - val_loss: 0.0318 - val_mse: 0.0318 - val_mae: 0.1307 - lr: 1.0000e-05 - 124ms/epoch - 7ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0099 - mse: 0.0099 - mae: 0.0783 - val_loss: 0.0317 - val_mse: 0.0317 - val_mae: 0.1306 - lr: 1.0000e-05 - 114ms/epoch - 7ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0104 - mse: 0.0104 - mae: 0.0793 - val_loss: 0.0315 - val_mse: 0.0315 - val_mae: 0.1302 - lr: 1.0000e-05 - 121ms/epoch - 7ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0096 - mse: 0.0096 - mae: 0.0762 - val_loss: 0.0315 - val_mse: 0.0315 - val_mae: 0.1301 - lr: 1.0000e-05 - 171ms/epoch - 10ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0086 - mse: 0.0086 - mae: 0.0744 - val_loss: 0.0316 - val_mse: 0.0316 - val_mae: 0.1304 - lr: 1.0000e-05 - 120ms/epoch - 7ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0093 - mse: 0.0093 - mae: 0.0772 - val_loss: 0.0317 - val_mse: 0.0317 - val_mae: 0.1305 - lr: 1.0000e-05 - 125ms/epoch - 7ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0102 - mse: 0.0102 - mae: 0.0799 - val_loss: 0.0316 - val_mse: 0.0316 - val_mae: 0.1304 - lr: 1.0000e-05 - 135ms/epoch - 8ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0096 - mse: 0.0096 - mae: 0.0770 - val_loss: 0.0316 - val_mse: 0.0316 - val_mae: 0.1302 - lr: 1.0000e-05 - 129ms/epoch - 8ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0094 - mse: 0.0094 - mae: 0.0764 - val_loss: 0.0316 - val_mse: 0.0316 - val_mae: 0.1302 - lr: 1.0000e-05 - 118ms/epoch - 7ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0096 - mse: 0.0096 - mae: 0.0769 - val_loss: 0.0316 - val_mse: 0.0316 - val_mae: 0.1303 - lr: 1.0000e-05 - 130ms/epoch - 8ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.02040
17/17 - 0s - loss: 0.0097 - mse: 0.0097 - mae: 0.0786 - val_loss: 0.0316 - val_mse: 0.0316 - val_mae: 0.1303 - lr: 1.0000e-05 - 134ms/epoch - 8ms/step
Epoch 00054: early stopping
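The log above shows three callbacks interacting: `ModelCheckpoint` saves on improvement, `ReduceLROnPlateau` cuts the learning rate by 10x after a run of non-improving epochs, and `EarlyStopping` halts training. A minimal pure-Python sketch of that schedule logic (the patience values here are assumptions for illustration, not read from the notebook's source):

```python
def run_schedule(val_losses, lr=1e-3, factor=0.1,
                 lr_patience=5, stop_patience=10, min_lr=1e-5):
    """Return (best_val_loss, final_lr, epochs_run) for a loss trace."""
    best = float("inf")
    wait_lr = wait_stop = epochs = 0
    for vl in val_losses:
        epochs += 1
        if vl < best:                       # "val_loss improved ... saving model"
            best, wait_lr, wait_stop = vl, 0, 0
            continue
        wait_lr += 1                        # "val_loss did not improve"
        wait_stop += 1
        if wait_lr >= lr_patience:          # "ReduceLROnPlateau reducing learning rate"
            lr, wait_lr = max(lr * factor, min_lr), 0
        if wait_stop >= stop_patience:      # "early stopping"
            break
    return best, lr, epochs
```

This mirrors why the run above ends well past the best epoch: the learning-rate floor is reached first, then the early-stopping patience runs out.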
SMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 52.20083682521797 
RMSE:	 7.225014659169763 
MAPE:	 5.885308101636885

EMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	47.39% Accuracy
MSE:	 63.36969283161608 
RMSE:	 7.960508327463522 
MAPE:	 6.712143148682283

WMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	41.04% Accuracy
MSE:	 71.61670961743842 
RMSE:	 8.462665633087392 
MAPE:	 6.541482431642796
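The per-MA summaries above report MSE, RMSE, MAPE, and a directional accuracy. A sketch of how they can be computed with NumPy; the "Prediction vs Close" definition used here (predicted move agrees in sign with the actual close-to-close move) is an assumption about what the notebook measures:

```python
import numpy as np

def evaluate(pred, close):
    """MSE, RMSE, MAPE (%) and assumed directional accuracy (%)."""
    pred, close = np.asarray(pred, float), np.asarray(close, float)
    mse = np.mean((pred - close) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((close - pred) / close)) * 100
    # Assumed definition: sign of (today's prediction - yesterday's close)
    # vs sign of the actual close-to-close change.
    actual_dir = np.sign(np.diff(close))
    pred_dir = np.sign(pred[1:] - close[:-1])
    acc = np.mean(pred_dir == actual_dir) * 100
    return mse, rmse, mape, acc
```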
DEMA
DEMA([input_arrays], [timeperiod=30])

Double Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
89
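The block above is TA-Lib's docstring for `DEMA`. The underlying formula is standard: `DEMA(n) = 2*EMA(n) - EMA(EMA(n), n)`, which cancels much of the single EMA's lag. A pandas sketch of the same formula (note: pandas' `ewm` seeds differently from TA-Lib, so the warm-up values will not match `talib.DEMA` exactly):

```python
import pandas as pd

def dema(price: pd.Series, timeperiod: int = 30) -> pd.Series:
    """Double Exponential Moving Average: 2*EMA(n) - EMA(EMA(n), n)."""
    ema1 = price.ewm(span=timeperiod, adjust=False).mean()
    ema2 = ema1.ewm(span=timeperiod, adjust=False).mean()
    return 2 * ema1 - ema2
```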

Working on DEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17007.773, Time=3.30 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14576.593, Time=5.09 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16293.727, Time=8.63 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14574.593, Time=8.21 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16647.994, Time=10.32 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15621.952, Time=11.18 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-16876.201, Time=12.12 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-17032.019, Time=6.79 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-17006.612, Time=3.61 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-17089.440, Time=7.70 sec
 ARIMA(3,3,2)(0,0,0)[0]             : AIC=inf, Time=17.54 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-17005.977, Time=4.14 sec
 ARIMA(3,3,1)(0,0,0)[0] intercept   : AIC=-17000.665, Time=4.95 sec

Best model:  ARIMA(3,3,1)(0,0,0)[0]          
Total fit time: 103.606 seconds
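The stepwise search above fits candidate (p,d,q) orders and keeps the one with the lowest AIC. Stripped of the fitting itself, the selection step reduces to a minimum over the AIC values printed in the trace:

```python
# AIC values copied from the stepwise search trace above.
aics = {
    (1, 3, 1): -17007.773, (0, 3, 0): -14576.593, (1, 3, 0): -16293.727,
    (2, 3, 0): -17032.019, (3, 3, 0): -17006.612, (3, 3, 1): -17089.440,
    (2, 3, 2): -17005.977, (0, 3, 2): -16876.201,
}
best = min(aics, key=aics.get)
print(best)  # (3, 3, 1), matching "Best model: ARIMA(3,3,1)(0,0,0)[0]"
```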
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 1)   Log Likelihood                8569.720
Date:                Sun, 12 Dec 2021   AIC                         -17089.440
Time:                        22:54:50   BIC                         -16972.169
Sample:                             0   HQIC                        -17044.403
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -2.799e-10   1.36e-20  -2.06e+10      0.000    -2.8e-10    -2.8e-10
x2         -2.816e-10   1.37e-20  -2.06e+10      0.000   -2.82e-10   -2.82e-10
x3         -2.804e-10   1.36e-20  -2.06e+10      0.000    -2.8e-10    -2.8e-10
x4             1.0000   1.36e-20   7.33e+19      0.000       1.000       1.000
x5         -2.598e-10   1.31e-20  -1.98e+10      0.000    -2.6e-10    -2.6e-10
x6         -1.388e-09   2.97e-20  -4.67e+10      0.000   -1.39e-09   -1.39e-09
x7         -2.788e-10   1.36e-20  -2.05e+10      0.000   -2.79e-10   -2.79e-10
x8         -2.761e-10   1.35e-20  -2.04e+10      0.000   -2.76e-10   -2.76e-10
x9          -2.22e-12   3.36e-22  -6.61e+09      0.000   -2.22e-12   -2.22e-12
x10        -1.345e-10   9.36e-21  -1.44e+10      0.000   -1.34e-10   -1.34e-10
x11        -2.898e-10   1.38e-20  -2.09e+10      0.000    -2.9e-10    -2.9e-10
x12        -2.602e-10   1.31e-20  -1.98e+10      0.000    -2.6e-10    -2.6e-10
x13        -2.807e-10   1.36e-20  -2.06e+10      0.000   -2.81e-10   -2.81e-10
x14         -1.87e-09   3.52e-20  -5.31e+10      0.000   -1.87e-09   -1.87e-09
x15        -2.767e-10   1.37e-20  -2.03e+10      0.000   -2.77e-10   -2.77e-10
x16        -8.184e-11   7.33e-21  -1.12e+10      0.000   -8.18e-11   -8.18e-11
x17        -2.407e-10   1.27e-20   -1.9e+10      0.000   -2.41e-10   -2.41e-10
x18        -6.412e-10   2.06e-20  -3.11e+10      0.000   -6.41e-10   -6.41e-10
x19        -2.915e-10   1.39e-20   -2.1e+10      0.000   -2.92e-10   -2.92e-10
x20        -4.337e-10   1.69e-20  -2.56e+10      0.000   -4.34e-10   -4.34e-10
ar.L1         -0.4924   1.46e-22  -3.38e+21      0.000      -0.492      -0.492
ar.L2         -0.1923   8.47e-23  -2.27e+21      0.000      -0.192      -0.192
ar.L3         -0.0461   4.02e-23  -1.15e+21      0.000      -0.046      -0.046
ma.L1         -0.7078   3.31e-22  -2.14e+21      0.000      -0.708      -0.708
sigma2       8.99e-11   6.95e-11      1.293      0.196   -4.64e-11    2.26e-10
===================================================================================
Ljung-Box (L1) (Q):                  55.12   Jarque-Bera (JB):           4171061.36
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             5.27
Prob(H) (two-sided):                  0.00   Kurtosis:                       355.48
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.88e+40. Standard errors may be unstable.
ARIMA order: (3, 3, 1) 
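The information criteria in the summary follow directly from the reported log-likelihood. With 25 estimated parameters (20 exogenous terms, ar.L1-L3, ma.L1, sigma2) and, by assumption, 805 effective observations (808 minus d=3 lost to differencing), AIC = 2k - 2 ln L and BIC = k ln n - 2 ln L reproduce the table:

```python
import math

k, loglik, n = 25, 8569.720, 805   # params, Log Likelihood, assumed effective nobs
aic = 2 * k - 2 * loglik           # -17089.44, as reported
bic = k * math.log(n) - 2 * loglik # ~ -16972.17, matching the table's BIC
print(round(aic, 3), round(bic, 3))
```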

Epoch 1/500

Epoch 00001: val_loss improved from inf to 1.35275, saving model to LSTM7.h5
10/10 - 3s - loss: 1.9288 - mse: 1.9288 - mae: 1.2571 - val_loss: 1.3528 - val_mse: 1.3528 - val_mae: 1.1188 - lr: 0.0010 - 3s/epoch - 291ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 1.35275 to 0.64177, saving model to LSTM7.h5
10/10 - 0s - loss: 0.5197 - mse: 0.5197 - mae: 0.6231 - val_loss: 0.6418 - val_mse: 0.6418 - val_mae: 0.7656 - lr: 0.0010 - 92ms/epoch - 9ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.64177 to 0.30853, saving model to LSTM7.h5
10/10 - 0s - loss: 0.1071 - mse: 0.1071 - mae: 0.2646 - val_loss: 0.3085 - val_mse: 0.3085 - val_mae: 0.5240 - lr: 0.0010 - 107ms/epoch - 11ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.30853 to 0.19601, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0925 - mse: 0.0925 - mae: 0.2548 - val_loss: 0.1960 - val_mse: 0.1960 - val_mae: 0.4108 - lr: 0.0010 - 125ms/epoch - 12ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.19601 to 0.15760, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0757 - mse: 0.0757 - mae: 0.2332 - val_loss: 0.1576 - val_mse: 0.1576 - val_mae: 0.3636 - lr: 0.0010 - 101ms/epoch - 10ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.15760 to 0.13422, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0458 - mse: 0.0458 - mae: 0.1700 - val_loss: 0.1342 - val_mse: 0.1342 - val_mae: 0.3311 - lr: 0.0010 - 100ms/epoch - 10ms/step
Epoch 7/500

Epoch 00007: val_loss improved from 0.13422 to 0.10393, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0411 - mse: 0.0411 - mae: 0.1601 - val_loss: 0.1039 - val_mse: 0.1039 - val_mae: 0.2857 - lr: 0.0010 - 113ms/epoch - 11ms/step
Epoch 8/500

Epoch 00008: val_loss improved from 0.10393 to 0.07498, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0325 - mse: 0.0325 - mae: 0.1421 - val_loss: 0.0750 - val_mse: 0.0750 - val_mae: 0.2370 - lr: 0.0010 - 105ms/epoch - 11ms/step
Epoch 9/500

Epoch 00009: val_loss improved from 0.07498 to 0.05407, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0272 - mse: 0.0272 - mae: 0.1299 - val_loss: 0.0541 - val_mse: 0.0541 - val_mae: 0.1980 - lr: 0.0010 - 110ms/epoch - 11ms/step
Epoch 10/500

Epoch 00010: val_loss improved from 0.05407 to 0.04324, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0250 - mse: 0.0250 - mae: 0.1245 - val_loss: 0.0432 - val_mse: 0.0432 - val_mae: 0.1754 - lr: 0.0010 - 103ms/epoch - 10ms/step
Epoch 11/500

Epoch 00011: val_loss improved from 0.04324 to 0.03748, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0212 - mse: 0.0212 - mae: 0.1141 - val_loss: 0.0375 - val_mse: 0.0375 - val_mae: 0.1626 - lr: 0.0010 - 116ms/epoch - 12ms/step
Epoch 12/500

Epoch 00012: val_loss improved from 0.03748 to 0.03116, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0193 - mse: 0.0193 - mae: 0.1085 - val_loss: 0.0312 - val_mse: 0.0312 - val_mae: 0.1480 - lr: 0.0010 - 111ms/epoch - 11ms/step
Epoch 13/500

Epoch 00013: val_loss improved from 0.03116 to 0.02450, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0176 - mse: 0.0176 - mae: 0.1045 - val_loss: 0.0245 - val_mse: 0.0245 - val_mae: 0.1308 - lr: 0.0010 - 102ms/epoch - 10ms/step
Epoch 14/500

Epoch 00014: val_loss improved from 0.02450 to 0.02174, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0156 - mse: 0.0156 - mae: 0.0994 - val_loss: 0.0217 - val_mse: 0.0217 - val_mae: 0.1231 - lr: 0.0010 - 104ms/epoch - 10ms/step
Epoch 15/500

Epoch 00015: val_loss improved from 0.02174 to 0.02072, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0132 - mse: 0.0132 - mae: 0.0921 - val_loss: 0.0207 - val_mse: 0.0207 - val_mae: 0.1200 - lr: 0.0010 - 92ms/epoch - 9ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.02072
10/10 - 0s - loss: 0.0146 - mse: 0.0146 - mae: 0.0939 - val_loss: 0.0213 - val_mse: 0.0213 - val_mae: 0.1213 - lr: 0.0010 - 86ms/epoch - 9ms/step
Epoch 17/500

Epoch 00017: val_loss improved from 0.02072 to 0.01947, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0162 - mse: 0.0162 - mae: 0.0992 - val_loss: 0.0195 - val_mse: 0.0195 - val_mae: 0.1162 - lr: 0.0010 - 111ms/epoch - 11ms/step
Epoch 18/500

Epoch 00018: val_loss improved from 0.01947 to 0.01923, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0129 - mse: 0.0129 - mae: 0.0883 - val_loss: 0.0192 - val_mse: 0.0192 - val_mae: 0.1154 - lr: 0.0010 - 112ms/epoch - 11ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0131 - mse: 0.0131 - mae: 0.0889 - val_loss: 0.0195 - val_mse: 0.0195 - val_mae: 0.1160 - lr: 0.0010 - 102ms/epoch - 10ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0119 - mse: 0.0119 - mae: 0.0863 - val_loss: 0.0194 - val_mse: 0.0194 - val_mae: 0.1157 - lr: 0.0010 - 90ms/epoch - 9ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0117 - mse: 0.0117 - mae: 0.0845 - val_loss: 0.0204 - val_mse: 0.0204 - val_mae: 0.1182 - lr: 0.0010 - 89ms/epoch - 9ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0129 - mse: 0.0129 - mae: 0.0847 - val_loss: 0.0213 - val_mse: 0.0213 - val_mae: 0.1202 - lr: 0.0010 - 82ms/epoch - 8ms/step
Epoch 23/500

Epoch 00023: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00023: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0117 - mse: 0.0117 - mae: 0.0843 - val_loss: 0.0203 - val_mse: 0.0203 - val_mae: 0.1176 - lr: 0.0010 - 81ms/epoch - 8ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0120 - mse: 0.0120 - mae: 0.0853 - val_loss: 0.0205 - val_mse: 0.0205 - val_mae: 0.1180 - lr: 1.0000e-04 - 88ms/epoch - 9ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0109 - mse: 0.0109 - mae: 0.0805 - val_loss: 0.0207 - val_mse: 0.0207 - val_mae: 0.1186 - lr: 1.0000e-04 - 76ms/epoch - 8ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0105 - mse: 0.0105 - mae: 0.0804 - val_loss: 0.0209 - val_mse: 0.0209 - val_mae: 0.1192 - lr: 1.0000e-04 - 80ms/epoch - 8ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0098 - mse: 0.0098 - mae: 0.0763 - val_loss: 0.0212 - val_mse: 0.0212 - val_mae: 0.1198 - lr: 1.0000e-04 - 91ms/epoch - 9ms/step
Epoch 28/500

Epoch 00028: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00028: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0112 - mse: 0.0112 - mae: 0.0813 - val_loss: 0.0215 - val_mse: 0.0215 - val_mae: 0.1204 - lr: 1.0000e-04 - 98ms/epoch - 10ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0102 - mse: 0.0102 - mae: 0.0786 - val_loss: 0.0215 - val_mse: 0.0215 - val_mae: 0.1205 - lr: 1.0000e-05 - 78ms/epoch - 8ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0106 - mse: 0.0106 - mae: 0.0794 - val_loss: 0.0215 - val_mse: 0.0215 - val_mae: 0.1205 - lr: 1.0000e-05 - 99ms/epoch - 10ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0097 - mse: 0.0097 - mae: 0.0775 - val_loss: 0.0215 - val_mse: 0.0215 - val_mae: 0.1205 - lr: 1.0000e-05 - 92ms/epoch - 9ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0109 - mse: 0.0109 - mae: 0.0809 - val_loss: 0.0216 - val_mse: 0.0216 - val_mae: 0.1206 - lr: 1.0000e-05 - 89ms/epoch - 9ms/step
Epoch 33/500

Epoch 00033: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00033: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0096 - mse: 0.0096 - mae: 0.0761 - val_loss: 0.0216 - val_mse: 0.0216 - val_mae: 0.1206 - lr: 1.0000e-05 - 89ms/epoch - 9ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0107 - mse: 0.0107 - mae: 0.0801 - val_loss: 0.0216 - val_mse: 0.0216 - val_mae: 0.1207 - lr: 1.0000e-05 - 84ms/epoch - 8ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0099 - mse: 0.0099 - mae: 0.0786 - val_loss: 0.0216 - val_mse: 0.0216 - val_mae: 0.1207 - lr: 1.0000e-05 - 76ms/epoch - 8ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0096 - mse: 0.0096 - mae: 0.0756 - val_loss: 0.0216 - val_mse: 0.0216 - val_mae: 0.1207 - lr: 1.0000e-05 - 84ms/epoch - 8ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0105 - mse: 0.0105 - mae: 0.0791 - val_loss: 0.0216 - val_mse: 0.0216 - val_mae: 0.1207 - lr: 1.0000e-05 - 82ms/epoch - 8ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0102 - mse: 0.0102 - mae: 0.0777 - val_loss: 0.0216 - val_mse: 0.0216 - val_mae: 0.1207 - lr: 1.0000e-05 - 81ms/epoch - 8ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0108 - mse: 0.0108 - mae: 0.0800 - val_loss: 0.0216 - val_mse: 0.0216 - val_mae: 0.1207 - lr: 1.0000e-05 - 101ms/epoch - 10ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0112 - mse: 0.0112 - mae: 0.0812 - val_loss: 0.0216 - val_mse: 0.0216 - val_mae: 0.1206 - lr: 1.0000e-05 - 96ms/epoch - 10ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0110 - mse: 0.0110 - mae: 0.0781 - val_loss: 0.0216 - val_mse: 0.0216 - val_mae: 0.1206 - lr: 1.0000e-05 - 80ms/epoch - 8ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0098 - mse: 0.0098 - mae: 0.0764 - val_loss: 0.0216 - val_mse: 0.0216 - val_mae: 0.1206 - lr: 1.0000e-05 - 92ms/epoch - 9ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0104 - mse: 0.0104 - mae: 0.0795 - val_loss: 0.0216 - val_mse: 0.0216 - val_mae: 0.1206 - lr: 1.0000e-05 - 85ms/epoch - 8ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0108 - mse: 0.0108 - mae: 0.0806 - val_loss: 0.0216 - val_mse: 0.0216 - val_mae: 0.1206 - lr: 1.0000e-05 - 109ms/epoch - 11ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0107 - mse: 0.0107 - mae: 0.0803 - val_loss: 0.0216 - val_mse: 0.0216 - val_mae: 0.1206 - lr: 1.0000e-05 - 86ms/epoch - 9ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0099 - mse: 0.0099 - mae: 0.0776 - val_loss: 0.0216 - val_mse: 0.0216 - val_mae: 0.1206 - lr: 1.0000e-05 - 87ms/epoch - 9ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0101 - mse: 0.0101 - mae: 0.0794 - val_loss: 0.0216 - val_mse: 0.0216 - val_mae: 0.1206 - lr: 1.0000e-05 - 86ms/epoch - 9ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0114 - mse: 0.0114 - mae: 0.0816 - val_loss: 0.0216 - val_mse: 0.0216 - val_mae: 0.1207 - lr: 1.0000e-05 - 87ms/epoch - 9ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0104 - mse: 0.0104 - mae: 0.0787 - val_loss: 0.0216 - val_mse: 0.0216 - val_mae: 0.1207 - lr: 1.0000e-05 - 97ms/epoch - 10ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0097 - mse: 0.0097 - mae: 0.0783 - val_loss: 0.0217 - val_mse: 0.0217 - val_mae: 0.1208 - lr: 1.0000e-05 - 78ms/epoch - 8ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0113 - mse: 0.0113 - mae: 0.0816 - val_loss: 0.0217 - val_mse: 0.0217 - val_mae: 0.1209 - lr: 1.0000e-05 - 85ms/epoch - 9ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0094 - mse: 0.0094 - mae: 0.0770 - val_loss: 0.0218 - val_mse: 0.0218 - val_mae: 0.1210 - lr: 1.0000e-05 - 102ms/epoch - 10ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0103 - mse: 0.0103 - mae: 0.0790 - val_loss: 0.0218 - val_mse: 0.0218 - val_mae: 0.1211 - lr: 1.0000e-05 - 78ms/epoch - 8ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0104 - mse: 0.0104 - mae: 0.0790 - val_loss: 0.0218 - val_mse: 0.0218 - val_mae: 0.1212 - lr: 1.0000e-05 - 83ms/epoch - 8ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0100 - mse: 0.0100 - mae: 0.0792 - val_loss: 0.0218 - val_mse: 0.0218 - val_mae: 0.1211 - lr: 1.0000e-05 - 90ms/epoch - 9ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0104 - mse: 0.0104 - mae: 0.0787 - val_loss: 0.0218 - val_mse: 0.0218 - val_mae: 0.1211 - lr: 1.0000e-05 - 80ms/epoch - 8ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0103 - mse: 0.0103 - mae: 0.0776 - val_loss: 0.0218 - val_mse: 0.0218 - val_mae: 0.1211 - lr: 1.0000e-05 - 91ms/epoch - 9ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0108 - mse: 0.0108 - mae: 0.0813 - val_loss: 0.0218 - val_mse: 0.0218 - val_mae: 0.1211 - lr: 1.0000e-05 - 94ms/epoch - 9ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0105 - mse: 0.0105 - mae: 0.0785 - val_loss: 0.0218 - val_mse: 0.0218 - val_mae: 0.1211 - lr: 1.0000e-05 - 87ms/epoch - 9ms/step
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0114 - mse: 0.0114 - mae: 0.0812 - val_loss: 0.0218 - val_mse: 0.0218 - val_mae: 0.1210 - lr: 1.0000e-05 - 89ms/epoch - 9ms/step
Epoch 61/500

Epoch 00061: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0108 - mse: 0.0108 - mae: 0.0800 - val_loss: 0.0218 - val_mse: 0.0218 - val_mae: 0.1211 - lr: 1.0000e-05 - 93ms/epoch - 9ms/step
Epoch 62/500

Epoch 00062: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0105 - mse: 0.0105 - mae: 0.0781 - val_loss: 0.0218 - val_mse: 0.0218 - val_mae: 0.1211 - lr: 1.0000e-05 - 85ms/epoch - 9ms/step
Epoch 63/500

Epoch 00063: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0096 - mse: 0.0096 - mae: 0.0754 - val_loss: 0.0218 - val_mse: 0.0218 - val_mae: 0.1211 - lr: 1.0000e-05 - 97ms/epoch - 10ms/step
Epoch 64/500

Epoch 00064: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0107 - mse: 0.0107 - mae: 0.0804 - val_loss: 0.0219 - val_mse: 0.0219 - val_mae: 0.1212 - lr: 1.0000e-05 - 86ms/epoch - 9ms/step
Epoch 65/500

Epoch 00065: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0108 - mse: 0.0108 - mae: 0.0804 - val_loss: 0.0219 - val_mse: 0.0219 - val_mae: 0.1213 - lr: 1.0000e-05 - 78ms/epoch - 8ms/step
Epoch 66/500

Epoch 00066: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0098 - mse: 0.0098 - mae: 0.0769 - val_loss: 0.0219 - val_mse: 0.0219 - val_mae: 0.1214 - lr: 1.0000e-05 - 77ms/epoch - 8ms/step
Epoch 67/500

Epoch 00067: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0102 - mse: 0.0102 - mae: 0.0791 - val_loss: 0.0219 - val_mse: 0.0219 - val_mae: 0.1214 - lr: 1.0000e-05 - 101ms/epoch - 10ms/step
Epoch 68/500

Epoch 00068: val_loss did not improve from 0.01923
10/10 - 0s - loss: 0.0094 - mse: 0.0094 - mae: 0.0775 - val_loss: 0.0220 - val_mse: 0.0220 - val_mae: 0.1215 - lr: 1.0000e-05 - 92ms/epoch - 9ms/step
Epoch 00068: early stopping
SMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 52.20083682521797 
RMSE:	 7.225014659169763 
MAPE:	 5.885308101636885

EMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	47.39% Accuracy
MSE:	 63.36969283161608 
RMSE:	 7.960508327463522 
MAPE:	 6.712143148682283

WMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	41.04% Accuracy
MSE:	 71.61670961743842 
RMSE:	 8.462665633087392 
MAPE:	 6.541482431642796

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 168.8259463108875 
RMSE:	 12.99330390281423 
MAPE:	 11.83342735128442
KAMA
KAMA([input_arrays], [timeperiod=30])

Kaufman Adaptive Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
18
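The block above is TA-Lib's docstring for `KAMA`. Kaufman's adaptive MA scales its smoothing constant by an efficiency ratio (net change over total path length), so it tracks fast in trends and flattens in chop. A NumPy sketch of the standard recursion (seeded with the first available price, so warm-up values differ from `talib.KAMA`):

```python
import numpy as np

def kama(price, timeperiod=30, fast=2, slow=30):
    """Kaufman Adaptive Moving Average via the standard recursion."""
    price = np.asarray(price, float)
    n = timeperiod
    out = np.full_like(price, np.nan)
    out[n - 1] = price[n - 1]                 # seed (differs from TA-Lib's seed)
    sc_fast, sc_slow = 2 / (fast + 1), 2 / (slow + 1)
    for i in range(n, len(price)):
        change = abs(price[i] - price[i - n])
        volatility = np.sum(np.abs(np.diff(price[i - n:i + 1])))
        er = change / volatility if volatility else 0.0   # efficiency ratio
        sc = (er * (sc_fast - sc_slow) + sc_slow) ** 2    # adaptive smoothing
        out[i] = out[i - 1] + sc * (price[i] - out[i - 1])
    return out
```

On a perfectly trending series the efficiency ratio is 1, so KAMA smooths at the fast EMA rate.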

Working on KAMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17007.733, Time=3.45 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14576.593, Time=5.19 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16469.294, Time=9.52 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14574.593, Time=8.18 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16346.513, Time=10.74 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-16569.862, Time=11.90 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-16356.870, Time=18.04 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-17033.457, Time=6.48 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-17006.582, Time=3.84 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-17089.434, Time=7.42 sec
 ARIMA(3,3,2)(0,0,0)[0]             : AIC=-15789.397, Time=14.33 sec
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-15386.395, Time=26.84 sec
 ARIMA(3,3,1)(0,0,0)[0] intercept   : AIC=47.433, Time=7.29 sec

Best model:  ARIMA(3,3,1)(0,0,0)[0]          
Total fit time: 133.243 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 1)   Log Likelihood                8569.717
Date:                Sun, 12 Dec 2021   AIC                         -17089.434
Time:                        23:08:46   BIC                         -16972.163
Sample:                             0   HQIC                        -17044.397
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -2.222e-10   9.26e-21   -2.4e+10      0.000   -2.22e-10   -2.22e-10
x2         -2.175e-10   9.18e-21  -2.37e+10      0.000   -2.18e-10   -2.18e-10
x3         -2.088e-10   8.98e-21  -2.33e+10      0.000   -2.09e-10   -2.09e-10
x4             1.0000   9.08e-21    1.1e+20      0.000       1.000       1.000
x5         -1.927e-10   8.64e-21  -2.23e+10      0.000   -1.93e-10   -1.93e-10
x6          -1.33e-09   2.17e-20  -6.14e+10      0.000   -1.33e-09   -1.33e-09
x7         -2.053e-10   8.93e-21   -2.3e+10      0.000   -2.05e-10   -2.05e-10
x8         -1.999e-10   8.84e-21  -2.26e+10      0.000      -2e-10      -2e-10
x9           -3.6e-11   1.09e-21  -3.29e+10      0.000    -3.6e-11    -3.6e-11
x10        -9.188e-11   3.87e-21  -2.37e+10      0.000   -9.19e-11   -9.19e-11
x11        -2.014e-10   8.86e-21  -2.27e+10      0.000   -2.01e-10   -2.01e-10
x12        -1.994e-10   8.77e-21  -2.27e+10      0.000   -1.99e-10   -1.99e-10
x13        -2.115e-10   9.05e-21  -2.34e+10      0.000   -2.12e-10   -2.12e-10
x14        -1.723e-09    2.6e-20  -6.63e+10      0.000   -1.72e-09   -1.72e-09
x15        -2.116e-10    9.1e-21  -2.33e+10      0.000   -2.12e-10   -2.12e-10
x16        -3.169e-10   1.11e-20  -2.85e+10      0.000   -3.17e-10   -3.17e-10
x17        -1.804e-10    8.4e-21  -2.15e+10      0.000    -1.8e-10    -1.8e-10
x18        -1.463e-10   7.54e-21  -1.94e+10      0.000   -1.46e-10   -1.46e-10
x19        -2.598e-10   1.01e-20  -2.58e+10      0.000    -2.6e-10    -2.6e-10
x20        -3.922e-10   1.24e-20  -3.18e+10      0.000   -3.92e-10   -3.92e-10
ar.L1         -0.4926   1.44e-22  -3.42e+21      0.000      -0.493      -0.493
ar.L2         -0.1937    8.6e-23  -2.25e+21      0.000      -0.194      -0.194
ar.L3         -0.0441   3.86e-23  -1.14e+21      0.000      -0.044      -0.044
ma.L1         -0.7085    3.3e-22  -2.15e+21      0.000      -0.709      -0.709
sigma2       8.99e-11   6.95e-11      1.293      0.196   -4.64e-11    2.26e-10
===================================================================================
Ljung-Box (L1) (Q):                  57.24   Jarque-Bera (JB):           3956070.89
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.01   Skew:                             5.16
Prob(H) (two-sided):                  0.00   Kurtosis:                       346.28
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 5.5e+39. Standard errors may be unstable.
ARIMA order: (3, 3, 1) 
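The stepwise search above picks the order that minimizes AIC, where AIC = 2k − 2 ln L̂ and k counts estimated parameters. For the winning SARIMAX(3, 3, 1) fit, k = 25 (my count from the summary table: twenty exogenous coefficients x1..x20, three AR terms, one MA term, and sigma2), which reproduces the reported AIC from the reported log-likelihood:

```python
# Sanity-check the stepwise search's winning AIC against the SARIMAX
# summary above, using AIC = 2k - 2*ln(L). Values copied from the table;
# the parameter count k is inferred from the coefficient rows.
log_likelihood = 8569.717
k = 20 + 3 + 1 + 1  # x1..x20 exogenous, ar.L1-L3, ma.L1, sigma2

aic = 2 * k - 2 * log_likelihood
print(round(aic, 3))  # -17089.434, matching the reported AIC
```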

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.02610, saving model to LSTM7.h5
45/45 - 3s - loss: 0.1296 - mse: 0.1296 - mae: 0.2633 - val_loss: 0.0261 - val_mse: 0.0261 - val_mae: 0.1400 - lr: 0.0010 - 3s/epoch - 77ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.02610 to 0.01412, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0401 - mse: 0.0401 - mae: 0.1617 - val_loss: 0.0141 - val_mse: 0.0141 - val_mae: 0.0944 - lr: 0.0010 - 320ms/epoch - 7ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.01412 to 0.00846, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0215 - mse: 0.0215 - mae: 0.1159 - val_loss: 0.0085 - val_mse: 0.0085 - val_mae: 0.0734 - lr: 0.0010 - 283ms/epoch - 6ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.00846 to 0.00697, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0186 - mse: 0.0186 - mae: 0.1095 - val_loss: 0.0070 - val_mse: 0.0070 - val_mae: 0.0673 - lr: 0.0010 - 267ms/epoch - 6ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.00697 to 0.00573, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0166 - mse: 0.0166 - mae: 0.1024 - val_loss: 0.0057 - val_mse: 0.0057 - val_mae: 0.0616 - lr: 0.0010 - 313ms/epoch - 7ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.00573
45/45 - 0s - loss: 0.0129 - mse: 0.0129 - mae: 0.0878 - val_loss: 0.0058 - val_mse: 0.0058 - val_mae: 0.0611 - lr: 0.0010 - 302ms/epoch - 7ms/step
Epoch 7/500

Epoch 00007: val_loss improved from 0.00573 to 0.00560, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0132 - mse: 0.0132 - mae: 0.0907 - val_loss: 0.0056 - val_mse: 0.0056 - val_mae: 0.0599 - lr: 0.0010 - 307ms/epoch - 7ms/step
Epoch 8/500

Epoch 00008: val_loss improved from 0.00560 to 0.00424, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0132 - mse: 0.0132 - mae: 0.0900 - val_loss: 0.0042 - val_mse: 0.0042 - val_mae: 0.0523 - lr: 0.0010 - 313ms/epoch - 7ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.00424
45/45 - 0s - loss: 0.0111 - mse: 0.0111 - mae: 0.0836 - val_loss: 0.0061 - val_mse: 0.0061 - val_mae: 0.0617 - lr: 0.0010 - 298ms/epoch - 7ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.00424
45/45 - 0s - loss: 0.0151 - mse: 0.0151 - mae: 0.0970 - val_loss: 0.0044 - val_mse: 0.0044 - val_mae: 0.0520 - lr: 0.0010 - 261ms/epoch - 6ms/step
Epoch 11/500

Epoch 00011: val_loss improved from 0.00424 to 0.00401, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0139 - mse: 0.0139 - mae: 0.0919 - val_loss: 0.0040 - val_mse: 0.0040 - val_mae: 0.0496 - lr: 0.0010 - 292ms/epoch - 6ms/step
Epoch 12/500

Epoch 00012: val_loss improved from 0.00401 to 0.00395, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0148 - mse: 0.0148 - mae: 0.0958 - val_loss: 0.0039 - val_mse: 0.0039 - val_mae: 0.0492 - lr: 0.0010 - 330ms/epoch - 7ms/step
Epoch 13/500

Epoch 00013: val_loss improved from 0.00395 to 0.00365, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0133 - mse: 0.0133 - mae: 0.0915 - val_loss: 0.0037 - val_mse: 0.0037 - val_mae: 0.0485 - lr: 0.0010 - 287ms/epoch - 6ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.00365
45/45 - 0s - loss: 0.0131 - mse: 0.0131 - mae: 0.0914 - val_loss: 0.0038 - val_mse: 0.0038 - val_mae: 0.0484 - lr: 0.0010 - 282ms/epoch - 6ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.00365
45/45 - 0s - loss: 0.0141 - mse: 0.0141 - mae: 0.0970 - val_loss: 0.0039 - val_mse: 0.0039 - val_mae: 0.0488 - lr: 0.0010 - 294ms/epoch - 7ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.00365
45/45 - 0s - loss: 0.0130 - mse: 0.0130 - mae: 0.0904 - val_loss: 0.0037 - val_mse: 0.0037 - val_mae: 0.0489 - lr: 0.0010 - 293ms/epoch - 7ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.00365
45/45 - 0s - loss: 0.0115 - mse: 0.0115 - mae: 0.0852 - val_loss: 0.0044 - val_mse: 0.0044 - val_mae: 0.0512 - lr: 0.0010 - 254ms/epoch - 6ms/step
Epoch 18/500

Epoch 00018: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00018: val_loss did not improve from 0.00365
45/45 - 0s - loss: 0.0117 - mse: 0.0117 - mae: 0.0878 - val_loss: 0.0050 - val_mse: 0.0050 - val_mae: 0.0543 - lr: 0.0010 - 279ms/epoch - 6ms/step
Epoch 19/500

Epoch 00019: val_loss improved from 0.00365 to 0.00355, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0188 - mse: 0.0188 - mae: 0.1110 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0483 - lr: 1.0000e-04 - 290ms/epoch - 6ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0658 - val_loss: 0.0037 - val_mse: 0.0037 - val_mae: 0.0501 - lr: 1.0000e-04 - 258ms/epoch - 6ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0594 - val_loss: 0.0037 - val_mse: 0.0037 - val_mae: 0.0498 - lr: 1.0000e-04 - 303ms/epoch - 7ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0577 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0491 - lr: 1.0000e-04 - 287ms/epoch - 6ms/step
Epoch 23/500

Epoch 00023: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00023: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0565 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0487 - lr: 1.0000e-04 - 274ms/epoch - 6ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0573 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0487 - lr: 1.0000e-05 - 278ms/epoch - 6ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0556 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0486 - lr: 1.0000e-05 - 293ms/epoch - 7ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0580 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0486 - lr: 1.0000e-05 - 298ms/epoch - 7ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0544 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0486 - lr: 1.0000e-05 - 268ms/epoch - 6ms/step
Epoch 28/500

Epoch 00028: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00028: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0538 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0486 - lr: 1.0000e-05 - 292ms/epoch - 6ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0558 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0486 - lr: 1.0000e-05 - 295ms/epoch - 7ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0555 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0486 - lr: 1.0000e-05 - 288ms/epoch - 6ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0569 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0486 - lr: 1.0000e-05 - 280ms/epoch - 6ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0567 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0485 - lr: 1.0000e-05 - 291ms/epoch - 6ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0545 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0485 - lr: 1.0000e-05 - 276ms/epoch - 6ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0550 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0484 - lr: 1.0000e-05 - 288ms/epoch - 6ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0560 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0484 - lr: 1.0000e-05 - 283ms/epoch - 6ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0545 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0484 - lr: 1.0000e-05 - 282ms/epoch - 6ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0569 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0484 - lr: 1.0000e-05 - 259ms/epoch - 6ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0546 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0484 - lr: 1.0000e-05 - 293ms/epoch - 7ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0559 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0484 - lr: 1.0000e-05 - 294ms/epoch - 7ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0542 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0484 - lr: 1.0000e-05 - 299ms/epoch - 7ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0553 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0484 - lr: 1.0000e-05 - 249ms/epoch - 6ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0540 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0483 - lr: 1.0000e-05 - 305ms/epoch - 7ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0555 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0483 - lr: 1.0000e-05 - 305ms/epoch - 7ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0551 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0483 - lr: 1.0000e-05 - 288ms/epoch - 6ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0543 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0483 - lr: 1.0000e-05 - 295ms/epoch - 7ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0539 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0483 - lr: 1.0000e-05 - 290ms/epoch - 6ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0539 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0483 - lr: 1.0000e-05 - 274ms/epoch - 6ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0553 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0482 - lr: 1.0000e-05 - 283ms/epoch - 6ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0550 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0482 - lr: 1.0000e-05 - 288ms/epoch - 6ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0529 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0482 - lr: 1.0000e-05 - 281ms/epoch - 6ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0528 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0482 - lr: 1.0000e-05 - 282ms/epoch - 6ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0552 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0482 - lr: 1.0000e-05 - 286ms/epoch - 6ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0538 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0483 - lr: 1.0000e-05 - 268ms/epoch - 6ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0547 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0483 - lr: 1.0000e-05 - 278ms/epoch - 6ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0545 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0483 - lr: 1.0000e-05 - 284ms/epoch - 6ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0563 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0483 - lr: 1.0000e-05 - 272ms/epoch - 6ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0541 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0483 - lr: 1.0000e-05 - 274ms/epoch - 6ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0568 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0484 - lr: 1.0000e-05 - 268ms/epoch - 6ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0548 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0484 - lr: 1.0000e-05 - 281ms/epoch - 6ms/step
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0558 - val_loss: 0.0036 - val_mse: 0.0036 - val_mae: 0.0484 - lr: 1.0000e-05 - 277ms/epoch - 6ms/step
Epoch 61/500

Epoch 00061: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0535 - val_loss: 0.0037 - val_mse: 0.0037 - val_mae: 0.0484 - lr: 1.0000e-05 - 274ms/epoch - 6ms/step
Epoch 62/500

Epoch 00062: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0527 - val_loss: 0.0037 - val_mse: 0.0037 - val_mae: 0.0484 - lr: 1.0000e-05 - 298ms/epoch - 7ms/step
Epoch 63/500

Epoch 00063: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0544 - val_loss: 0.0037 - val_mse: 0.0037 - val_mae: 0.0485 - lr: 1.0000e-05 - 276ms/epoch - 6ms/step
Epoch 64/500

Epoch 00064: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0548 - val_loss: 0.0037 - val_mse: 0.0037 - val_mae: 0.0485 - lr: 1.0000e-05 - 279ms/epoch - 6ms/step
Epoch 65/500

Epoch 00065: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0525 - val_loss: 0.0037 - val_mse: 0.0037 - val_mae: 0.0485 - lr: 1.0000e-05 - 260ms/epoch - 6ms/step
Epoch 66/500

Epoch 00066: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0557 - val_loss: 0.0037 - val_mse: 0.0037 - val_mae: 0.0486 - lr: 1.0000e-05 - 293ms/epoch - 7ms/step
Epoch 67/500

Epoch 00067: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0531 - val_loss: 0.0037 - val_mse: 0.0037 - val_mae: 0.0486 - lr: 1.0000e-05 - 285ms/epoch - 6ms/step
Epoch 68/500

Epoch 00068: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0528 - val_loss: 0.0037 - val_mse: 0.0037 - val_mae: 0.0486 - lr: 1.0000e-05 - 291ms/epoch - 6ms/step
Epoch 69/500

Epoch 00069: val_loss did not improve from 0.00355
45/45 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0527 - val_loss: 0.0037 - val_mse: 0.0037 - val_mae: 0.0487 - lr: 1.0000e-05 - 284ms/epoch - 6ms/step
Epoch 00069: early stopping
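Judging from the log above, training uses a `ModelCheckpoint` saving the best weights to LSTM7.h5, `EarlyStopping` on `val_loss`, and `ReduceLROnPlateau` cutting the learning rate by a factor of 10 (1e-3 → 1e-4 → 1e-5) with a floor of 1e-5. A minimal pure-Python simulation of that plateau schedule (the patience value here is an assumption, not read from the notebook):

```python
# Minimal plateau-based LR schedule mirroring Keras' ReduceLROnPlateau:
# if val_loss fails to improve for `patience` epochs in a row, multiply
# the learning rate by `factor`, never dropping below `min_lr`.
# patience=3 is an assumed value for illustration.
def schedule_lr(val_losses, lr=1e-3, factor=0.1, patience=3, min_lr=1e-5):
    best = float("inf")
    wait = 0
    history = []
    for loss in val_losses:
        if loss < best:
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                lr = max(lr * factor, min_lr)
                wait = 0
        history.append(lr)
    return history

losses = [0.50, 0.30, 0.29, 0.29, 0.29, 0.29, 0.28, 0.30, 0.30, 0.30]
print(schedule_lr(losses))
```

With this loss trace, the rate holds at 1e-3 for five epochs, drops to 1e-4 at the third non-improving epoch, and bottoms out at the 1e-5 floor on the next plateau, mirroring the reductions logged at epochs 18, 23, and 28 above.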
SMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 52.20083682521797 
RMSE:	 7.225014659169763 
MAPE:	 5.885308101636885

EMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	47.39% Accuracy
MSE:	 63.36969283161608 
RMSE:	 7.960508327463522 
MAPE:	 6.712143148682283

WMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	41.04% Accuracy
MSE:	 71.61670961743842 
RMSE:	 8.462665633087392 
MAPE:	 6.541482431642796

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 168.8259463108875 
RMSE:	 12.99330390281423 
MAPE:	 11.83342735128442

KAMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 69.29548051216841 
RMSE:	 8.324390699154408 
MAPE:	 6.942234066695414
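Each moving average above is scored the same way: MSE, RMSE, and MAPE against the close, plus directional ("up or down") hit rates. A minimal sketch of those metrics in pure Python (function names and the exact directional convention are my own, not taken from the notebook):

```python
import math

def mse(actual, predicted):
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    return math.sqrt(mse(actual, predicted))

def mape(actual, predicted):
    # Mean absolute percentage error; assumes no zero actuals.
    return 100 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

def directional_accuracy(actual, predicted):
    # Percent of steps where the predicted move (from the previous actual)
    # has the same sign as the realized move.
    hits = sum(
        (p1 - a0) * (a1 - a0) > 0
        for a0, a1, p1 in zip(actual, actual[1:], predicted[1:])
    )
    return 100 * hits / (len(actual) - 1)

close = [100.0, 101.0, 99.5, 102.0]
preds = [100.5, 100.2, 100.8, 101.5]
print(round(rmse(close, preds), 3), round(mape(close, preds), 3))
```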
MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])

MidPoint over period (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 14
Outputs:
    real
14
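Per the TA-Lib help text above, MIDPOINT is simply (highest + lowest) / 2 over a trailing window of `timeperiod` bars (default 14). A minimal pure-Python equivalent, shown with a short window for illustration:

```python
def midpoint(price, timeperiod=14):
    # TA-Lib-style MIDPOINT: (max + min) / 2 over each trailing window.
    # The first timeperiod-1 slots have no full window, so emit None
    # there (TA-Lib emits NaN).
    out = [None] * (timeperiod - 1)
    for i in range(timeperiod - 1, len(price)):
        window = price[i - timeperiod + 1 : i + 1]
        out.append((max(window) + min(window)) / 2)
    return out

print(midpoint([3.0, 1.0, 4.0, 1.0, 5.0], timeperiod=3))
# → [None, None, 2.5, 2.5, 3.0]
```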

Working on MIDPOINT predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17005.792, Time=3.99 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14576.592, Time=5.10 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16618.742, Time=8.55 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14574.592, Time=7.84 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-17004.301, Time=4.11 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15715.779, Time=22.91 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=inf, Time=3.83 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-17007.442, Time=3.82 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-17188.392, Time=16.95 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-17002.377, Time=4.06 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=-16356.269, Time=14.91 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 96.077 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood                8618.196
Date:                Sun, 12 Dec 2021   AIC                         -17188.392
Time:                        23:22:08   BIC                         -17075.812
Sample:                             0   HQIC                        -17145.157
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -3.582e-10   2.18e-20  -1.64e+10      0.000   -3.58e-10   -3.58e-10
x2         -3.575e-10   2.25e-20  -1.59e+10      0.000   -3.57e-10   -3.57e-10
x3         -3.653e-10   2.09e-20  -1.75e+10      0.000   -3.65e-10   -3.65e-10
x4             1.0000   2.18e-20   4.59e+19      0.000       1.000       1.000
x5         -3.252e-10   2.07e-20  -1.57e+10      0.000   -3.25e-10   -3.25e-10
x6         -7.157e-09   1.78e-19  -4.03e+10      0.000   -7.16e-09   -7.16e-09
x7          -3.29e-10   2.09e-20  -1.58e+10      0.000   -3.29e-10   -3.29e-10
x8          -3.28e-10   2.12e-20  -1.54e+10      0.000   -3.28e-10   -3.28e-10
x9         -1.775e-10   1.29e-21  -1.37e+11      0.000   -1.77e-10   -1.77e-10
x10         -2.94e-10    5.5e-21  -5.34e+10      0.000   -2.94e-10   -2.94e-10
x11        -3.247e-10   2.11e-20  -1.54e+10      0.000   -3.25e-10   -3.25e-10
x12        -3.357e-10   2.11e-20  -1.59e+10      0.000   -3.36e-10   -3.36e-10
x13         -3.46e-10   2.14e-20  -1.62e+10      0.000   -3.46e-10   -3.46e-10
x14        -2.825e-09   6.25e-20  -4.52e+10      0.000   -2.82e-09   -2.82e-09
x15        -3.957e-10   2.33e-20  -1.69e+10      0.000   -3.96e-10   -3.96e-10
x16        -2.548e-10   1.87e-20  -1.36e+10      0.000   -2.55e-10   -2.55e-10
x17        -2.495e-10   1.85e-20  -1.35e+10      0.000   -2.49e-10   -2.49e-10
x18        -1.073e-09   3.84e-20  -2.79e+10      0.000   -1.07e-09   -1.07e-09
x19        -4.343e-10   2.45e-20  -1.78e+10      0.000   -4.34e-10   -4.34e-10
x20        -1.047e-09   3.78e-20  -2.77e+10      0.000   -1.05e-09   -1.05e-09
ar.L1         -1.2157   8.99e-23  -1.35e+22      0.000      -1.216      -1.216
ar.L2         -0.9187   9.81e-23  -9.36e+21      0.000      -0.919      -0.919
ar.L3         -0.4095   9.98e-23   -4.1e+21      0.000      -0.409      -0.409
sigma2      7.969e-11   6.92e-11      1.151      0.250    -5.6e-11    2.15e-10
===================================================================================
Ljung-Box (L1) (Q):                   2.47   Jarque-Bera (JB):             15463.35
Prob(Q):                              0.12   Prob(JB):                         0.00
Heteroskedasticity (H):               0.35   Skew:                             0.62
Prob(H) (two-sided):                  0.00   Kurtosis:                        24.44
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 2.74e+40. Standard errors may be unstable.
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.12752, saving model to LSTM7.h5
58/58 - 3s - loss: 0.2627 - mse: 0.2627 - mae: 0.3809 - val_loss: 0.1275 - val_mse: 0.1275 - val_mae: 0.2964 - lr: 0.0010 - 3s/epoch - 55ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.12752 to 0.07458, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0511 - mse: 0.0511 - mae: 0.1784 - val_loss: 0.0746 - val_mse: 0.0746 - val_mae: 0.2065 - lr: 0.0010 - 361ms/epoch - 6ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.07458 to 0.07263, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0255 - mse: 0.0255 - mae: 0.1284 - val_loss: 0.0726 - val_mse: 0.0726 - val_mae: 0.2005 - lr: 0.0010 - 395ms/epoch - 7ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.07263
58/58 - 0s - loss: 0.0174 - mse: 0.0174 - mae: 0.1038 - val_loss: 0.0820 - val_mse: 0.0820 - val_mae: 0.2183 - lr: 0.0010 - 346ms/epoch - 6ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.07263
58/58 - 0s - loss: 0.0134 - mse: 0.0134 - mae: 0.0920 - val_loss: 0.0761 - val_mse: 0.0761 - val_mae: 0.2104 - lr: 0.0010 - 338ms/epoch - 6ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.07263
58/58 - 0s - loss: 0.0117 - mse: 0.0117 - mae: 0.0870 - val_loss: 0.0749 - val_mse: 0.0749 - val_mae: 0.2107 - lr: 0.0010 - 360ms/epoch - 6ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.07263
58/58 - 0s - loss: 0.0118 - mse: 0.0118 - mae: 0.0848 - val_loss: 0.0776 - val_mse: 0.0776 - val_mae: 0.2183 - lr: 0.0010 - 333ms/epoch - 6ms/step
Epoch 8/500

Epoch 00008: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00008: val_loss did not improve from 0.07263
58/58 - 0s - loss: 0.0087 - mse: 0.0087 - mae: 0.0743 - val_loss: 0.0860 - val_mse: 0.0860 - val_mae: 0.2379 - lr: 0.0010 - 340ms/epoch - 6ms/step
Epoch 9/500

Epoch 00009: val_loss improved from 0.07263 to 0.06375, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0160 - mse: 0.0160 - mae: 0.1015 - val_loss: 0.0637 - val_mse: 0.0637 - val_mae: 0.1932 - lr: 1.0000e-04 - 370ms/epoch - 6ms/step
Epoch 10/500

Epoch 00010: val_loss improved from 0.06375 to 0.06277, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0084 - mse: 0.0084 - mae: 0.0727 - val_loss: 0.0628 - val_mse: 0.0628 - val_mae: 0.1904 - lr: 1.0000e-04 - 361ms/epoch - 6ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0078 - mse: 0.0078 - mae: 0.0709 - val_loss: 0.0632 - val_mse: 0.0632 - val_mae: 0.1907 - lr: 1.0000e-04 - 370ms/epoch - 6ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0080 - mse: 0.0080 - mae: 0.0711 - val_loss: 0.0636 - val_mse: 0.0636 - val_mae: 0.1913 - lr: 1.0000e-04 - 350ms/epoch - 6ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0074 - mse: 0.0074 - mae: 0.0697 - val_loss: 0.0637 - val_mse: 0.0637 - val_mae: 0.1913 - lr: 1.0000e-04 - 346ms/epoch - 6ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0073 - mse: 0.0073 - mae: 0.0680 - val_loss: 0.0641 - val_mse: 0.0641 - val_mae: 0.1917 - lr: 1.0000e-04 - 355ms/epoch - 6ms/step
Epoch 15/500

Epoch 00015: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00015: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0669 - val_loss: 0.0651 - val_mse: 0.0651 - val_mae: 0.1935 - lr: 1.0000e-04 - 345ms/epoch - 6ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0073 - mse: 0.0073 - mae: 0.0677 - val_loss: 0.0647 - val_mse: 0.0647 - val_mae: 0.1925 - lr: 1.0000e-05 - 356ms/epoch - 6ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0071 - mse: 0.0071 - mae: 0.0667 - val_loss: 0.0647 - val_mse: 0.0647 - val_mae: 0.1925 - lr: 1.0000e-05 - 358ms/epoch - 6ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0640 - val_loss: 0.0649 - val_mse: 0.0649 - val_mae: 0.1928 - lr: 1.0000e-05 - 337ms/epoch - 6ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0625 - val_loss: 0.0645 - val_mse: 0.0645 - val_mae: 0.1921 - lr: 1.0000e-05 - 347ms/epoch - 6ms/step
Epoch 20/500

Epoch 00020: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00020: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0664 - val_loss: 0.0641 - val_mse: 0.0641 - val_mae: 0.1912 - lr: 1.0000e-05 - 353ms/epoch - 6ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0073 - mse: 0.0073 - mae: 0.0683 - val_loss: 0.0639 - val_mse: 0.0639 - val_mae: 0.1908 - lr: 1.0000e-05 - 345ms/epoch - 6ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0661 - val_loss: 0.0640 - val_mse: 0.0640 - val_mae: 0.1908 - lr: 1.0000e-05 - 343ms/epoch - 6ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0644 - val_loss: 0.0643 - val_mse: 0.0643 - val_mae: 0.1916 - lr: 1.0000e-05 - 358ms/epoch - 6ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0628 - val_loss: 0.0646 - val_mse: 0.0646 - val_mae: 0.1920 - lr: 1.0000e-05 - 326ms/epoch - 6ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0631 - val_loss: 0.0647 - val_mse: 0.0647 - val_mae: 0.1922 - lr: 1.0000e-05 - 347ms/epoch - 6ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0620 - val_loss: 0.0643 - val_mse: 0.0643 - val_mae: 0.1915 - lr: 1.0000e-05 - 341ms/epoch - 6ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0640 - val_loss: 0.0641 - val_mse: 0.0641 - val_mae: 0.1909 - lr: 1.0000e-05 - 327ms/epoch - 6ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0620 - val_loss: 0.0642 - val_mse: 0.0642 - val_mae: 0.1912 - lr: 1.0000e-05 - 342ms/epoch - 6ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0648 - val_loss: 0.0647 - val_mse: 0.0647 - val_mae: 0.1921 - lr: 1.0000e-05 - 350ms/epoch - 6ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0621 - val_loss: 0.0645 - val_mse: 0.0645 - val_mae: 0.1917 - lr: 1.0000e-05 - 343ms/epoch - 6ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0639 - val_loss: 0.0645 - val_mse: 0.0645 - val_mae: 0.1917 - lr: 1.0000e-05 - 351ms/epoch - 6ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0631 - val_loss: 0.0643 - val_mse: 0.0643 - val_mae: 0.1911 - lr: 1.0000e-05 - 350ms/epoch - 6ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0639 - val_loss: 0.0646 - val_mse: 0.0646 - val_mae: 0.1918 - lr: 1.0000e-05 - 355ms/epoch - 6ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0637 - val_loss: 0.0648 - val_mse: 0.0648 - val_mae: 0.1922 - lr: 1.0000e-05 - 373ms/epoch - 6ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0633 - val_loss: 0.0647 - val_mse: 0.0647 - val_mae: 0.1920 - lr: 1.0000e-05 - 337ms/epoch - 6ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0643 - val_loss: 0.0648 - val_mse: 0.0648 - val_mae: 0.1922 - lr: 1.0000e-05 - 346ms/epoch - 6ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0653 - val_loss: 0.0650 - val_mse: 0.0650 - val_mae: 0.1927 - lr: 1.0000e-05 - 372ms/epoch - 6ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0651 - val_loss: 0.0647 - val_mse: 0.0647 - val_mae: 0.1920 - lr: 1.0000e-05 - 340ms/epoch - 6ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0643 - val_loss: 0.0645 - val_mse: 0.0645 - val_mae: 0.1915 - lr: 1.0000e-05 - 352ms/epoch - 6ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0661 - val_loss: 0.0646 - val_mse: 0.0646 - val_mae: 0.1917 - lr: 1.0000e-05 - 360ms/epoch - 6ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0632 - val_loss: 0.0644 - val_mse: 0.0644 - val_mae: 0.1911 - lr: 1.0000e-05 - 334ms/epoch - 6ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0643 - val_loss: 0.0649 - val_mse: 0.0649 - val_mae: 0.1923 - lr: 1.0000e-05 - 338ms/epoch - 6ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0620 - val_loss: 0.0647 - val_mse: 0.0647 - val_mae: 0.1919 - lr: 1.0000e-05 - 338ms/epoch - 6ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0677 - val_loss: 0.0656 - val_mse: 0.0656 - val_mae: 0.1937 - lr: 1.0000e-05 - 357ms/epoch - 6ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0641 - val_loss: 0.0659 - val_mse: 0.0659 - val_mae: 0.1943 - lr: 1.0000e-05 - 343ms/epoch - 6ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0659 - val_loss: 0.0660 - val_mse: 0.0660 - val_mae: 0.1946 - lr: 1.0000e-05 - 342ms/epoch - 6ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0617 - val_loss: 0.0659 - val_mse: 0.0659 - val_mae: 0.1943 - lr: 1.0000e-05 - 349ms/epoch - 6ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0626 - val_loss: 0.0658 - val_mse: 0.0658 - val_mae: 0.1941 - lr: 1.0000e-05 - 345ms/epoch - 6ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0620 - val_loss: 0.0655 - val_mse: 0.0655 - val_mae: 0.1936 - lr: 1.0000e-05 - 344ms/epoch - 6ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0584 - val_loss: 0.0656 - val_mse: 0.0656 - val_mae: 0.1938 - lr: 1.0000e-05 - 350ms/epoch - 6ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0630 - val_loss: 0.0661 - val_mse: 0.0661 - val_mae: 0.1947 - lr: 1.0000e-05 - 358ms/epoch - 6ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0622 - val_loss: 0.0663 - val_mse: 0.0663 - val_mae: 0.1951 - lr: 1.0000e-05 - 341ms/epoch - 6ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0648 - val_loss: 0.0664 - val_mse: 0.0664 - val_mae: 0.1953 - lr: 1.0000e-05 - 338ms/epoch - 6ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0625 - val_loss: 0.0663 - val_mse: 0.0663 - val_mae: 0.1950 - lr: 1.0000e-05 - 339ms/epoch - 6ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0621 - val_loss: 0.0658 - val_mse: 0.0658 - val_mae: 0.1939 - lr: 1.0000e-05 - 352ms/epoch - 6ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0597 - val_loss: 0.0655 - val_mse: 0.0655 - val_mae: 0.1934 - lr: 1.0000e-05 - 351ms/epoch - 6ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0637 - val_loss: 0.0660 - val_mse: 0.0660 - val_mae: 0.1944 - lr: 1.0000e-05 - 346ms/epoch - 6ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0626 - val_loss: 0.0658 - val_mse: 0.0658 - val_mae: 0.1941 - lr: 1.0000e-05 - 340ms/epoch - 6ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0608 - val_loss: 0.0659 - val_mse: 0.0659 - val_mae: 0.1942 - lr: 1.0000e-05 - 343ms/epoch - 6ms/step
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.06277
58/58 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0621 - val_loss: 0.0659 - val_mse: 0.0659 - val_mae: 0.1942 - lr: 1.0000e-05 - 342ms/epoch - 6ms/step
Epoch 00060: early stopping
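The schedule visible in the log above — the learning rate stepping from 1e-3 to 1e-4 to 1e-5 after runs of stalled `val_loss`, then training halting — is the usual ReduceLROnPlateau plus EarlyStopping pattern. A minimal pure-Python replay of that plateau logic (a simplified sketch of the behaviour implied by the log, not Keras' exact bookkeeping):

```python
def replay_plateau(val_losses, lr=1e-3, factor=0.1, patience=5, min_lr=1e-5):
    """Replay ReduceLROnPlateau-style logic over a val_loss history.

    A sketch under assumed parameters (factor, patience, min_lr are
    guesses consistent with the log above, not taken from the notebook).
    """
    best = float("inf")
    wait = 0
    lrs = []
    for loss in val_losses:
        if loss < best:
            best, wait = loss, 0          # improvement resets the counter
        else:
            wait += 1
            if wait > patience:           # plateau detected: cut the LR
                lr = max(lr * factor, min_lr)
                wait = 0
        lrs.append(lr)
    return lrs

# One improvement followed by six stalled epochs triggers one LR reduction
print(replay_plateau([0.074, 0.080, 0.081, 0.082, 0.083, 0.084, 0.085]))
```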
SMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 52.20083682521797 
RMSE:	 7.225014659169763 
MAPE:	 5.885308101636885

EMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	47.39% Accuracy
MSE:	 63.36969283161608 
RMSE:	 7.960508327463522 
MAPE:	 6.712143148682283

WMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	41.04% Accuracy
MSE:	 71.61670961743842 
RMSE:	 8.462665633087392 
MAPE:	 6.541482431642796

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 168.8259463108875 
RMSE:	 12.99330390281423 
MAPE:	 11.83342735128442

KAMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 69.29548051216841 
RMSE:	 8.324390699154408 
MAPE:	 6.942234066695414

MIDPOINT
Prediction vs Close:		48.88% Accuracy
Prediction vs Prediction:	45.15% Accuracy
MSE:	 20.287194417114076 
RMSE:	 4.504130817051618 
MAPE:	 3.6982533278355527
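The MSE/RMSE/MAPE figures above are standard error metrics, and "Prediction vs Close" accuracy reads as a directional hit rate. A hedged sketch of how such a report could be computed — the directional definition is an assumption for illustration, not taken from this notebook's code:

```python
import numpy as np

def report(pred, close):
    # Error metrics as printed above; MAPE is expressed in percent.
    pred, close = np.asarray(pred), np.asarray(close)
    err = pred - close
    mse = float(np.mean(err ** 2))
    rmse = mse ** 0.5
    mape = float(np.mean(np.abs(err / close)) * 100)
    # Assumed "Prediction vs Close" definition: % of steps where the
    # predicted move has the same sign as the actual move.
    hit = np.sign(np.diff(pred)) == np.sign(np.diff(close))
    return mse, rmse, mape, float(hit.mean() * 100)

print(report([1, 2, 3, 5], [1, 2, 4, 4]))
```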
T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])

Triple Exponential Moving Average (T3) (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 5
    vfactor: 0.7
Outputs:
    real
19
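The TA-Lib help text above defines T3 by a `timeperiod` and `vfactor`. Tillson's T3 is three passes of a "generalized DEMA", GD(x) = EMA(x)·(1+v) − EMA(EMA(x))·v. A pandas sketch of that construction — an illustration of the formula, not TA-Lib's implementation, whose EMA seeding and warm-up period differ, so values will not match TA-Lib exactly:

```python
import pandas as pd

def gd(s, period, v):
    # Generalized DEMA: EMA*(1+v) - EMA(EMA)*v
    e1 = s.ewm(span=period, adjust=False).mean()
    e2 = e1.ewm(span=period, adjust=False).mean()
    return e1 * (1 + v) - e2 * v

def t3(s, timeperiod=5, vfactor=0.7):
    # Tillson T3: three nested GD passes with the same period and vfactor
    return gd(gd(gd(s, timeperiod, vfactor), timeperiod, vfactor),
              timeperiod, vfactor)

prices = pd.Series([10.0, 10.5, 11.0, 10.8, 11.2, 11.5, 11.3, 11.8, 12.0, 12.4])
smoothed = t3(prices)
```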

Working on T3 predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17007.439, Time=3.49 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-13714.163, Time=6.09 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-14620.288, Time=5.40 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-16512.116, Time=12.41 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-17085.548, Time=10.85 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-17009.877, Time=3.56 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-17089.740, Time=7.94 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-17006.211, Time=3.76 sec
 ARIMA(3,3,2)(0,0,0)[0]             : AIC=-17349.997, Time=19.17 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-17006.024, Time=4.31 sec
 ARIMA(3,3,3)(0,0,0)[0]             : AIC=-14720.521, Time=14.42 sec
 ARIMA(2,3,3)(0,0,0)[0]             : AIC=-16599.516, Time=14.49 sec
 ARIMA(3,3,2)(0,0,0)[0] intercept   : AIC=-13110.324, Time=18.62 sec

Best model:  ARIMA(3,3,2)(0,0,0)[0]          
Total fit time: 124.535 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 2)   Log Likelihood                8700.998
Date:                Sun, 12 Dec 2021   AIC                         -17349.997
Time:                        23:27:32   BIC                         -17228.035
Sample:                             0   HQIC                        -17303.158
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1          4.251e-09   2.48e-05      0.000      1.000   -4.85e-05    4.85e-05
x2          4.257e-09   2.48e-05      0.000      1.000   -4.86e-05    4.87e-05
x3          4.244e-09   2.34e-05      0.000      1.000   -4.58e-05    4.58e-05
x4             1.0000   2.37e-05   4.23e+04      0.000       1.000       1.000
x5          4.344e-09   2.35e-05      0.000      1.000    -4.6e-05     4.6e-05
x6          3.064e-09   6.26e-05   4.89e-05      1.000      -0.000       0.000
x7           4.26e-09   3.09e-05      0.000      1.000   -6.05e-05    6.05e-05
x8            -0.0001   4.28e-05     -2.782      0.005      -0.000   -3.51e-05
x9         -3.943e-09   4.01e-06     -0.001      0.999   -7.86e-06    7.85e-06
x10        -1.431e-05    9.6e-05     -0.149      0.881      -0.000       0.000
x11            0.0001   3.13e-05      3.693      0.000    5.42e-05       0.000
x12         1.616e-06   5.46e-05      0.030      0.976      -0.000       0.000
x13         4.247e-09   2.49e-05      0.000      1.000   -4.87e-05    4.87e-05
x14        -1.778e-08   5.56e-05     -0.000      1.000      -0.000       0.000
x15         4.488e-09      3e-05      0.000      1.000   -5.88e-05    5.88e-05
x16        -6.718e-09   4.66e-05     -0.000      1.000   -9.13e-05    9.13e-05
x17         3.935e-09    8.3e-06      0.000      1.000   -1.63e-05    1.63e-05
x18        -2.742e-08      0.000     -0.000      1.000      -0.000       0.000
x19         4.464e-09   4.48e-05   9.97e-05      1.000   -8.78e-05    8.78e-05
x20          4.06e-09      0.000   8.55e-06      1.000      -0.001       0.001
ar.L1         -1.2437   2.38e-08  -5.23e+07      0.000      -1.244      -1.244
ar.L2         -0.5344   9.34e-09  -5.72e+07      0.000      -0.534      -0.534
ar.L3         -0.1491   9.43e-10  -1.58e+08      0.000      -0.149      -0.149
ma.L1         -0.2521   9.13e-09  -2.76e+07      0.000      -0.252      -0.252
ma.L2         -0.7294   1.95e-08  -3.75e+07      0.000      -0.729      -0.729
sigma2      6.455e-11   6.89e-11      0.937      0.349   -7.05e-11       2e-10
===================================================================================
Ljung-Box (L1) (Q):                  30.63   Jarque-Bera (JB):           6336314.18
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                            13.86
Prob(H) (two-sided):                  0.00   Kurtosis:                       436.75
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.35e+27. Standard errors may be unstable.
ARIMA order: (3, 3, 2) 
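The stepwise search above fits a handful of candidate orders and keeps the one minimizing AIC; the selection step itself reduces to a minimum over the fitted candidates. A sketch of that bookkeeping, using AIC values copied from the log for this run:

```python
# AIC values copied from the stepwise-search log above (lower is better)
aic = {
    (1, 3, 1): -17007.439,
    (0, 3, 0): -13714.163,
    (1, 3, 0): -14620.288,
    (0, 3, 1): -16512.116,
    (2, 3, 1): -17085.548,
    (3, 3, 1): -17089.740,
    (3, 3, 2): -17349.997,
}
best_order = min(aic, key=aic.get)
print(best_order)  # -> (3, 3, 2), matching the "Best model" line above
```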

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.07401, saving model to LSTM7.h5
43/43 - 4s - loss: 0.0847 - mse: 0.0847 - mae: 0.2234 - val_loss: 0.0740 - val_mse: 0.0740 - val_mae: 0.2183 - lr: 0.0010 - 4s/epoch - 84ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.07401
43/43 - 0s - loss: 0.0201 - mse: 0.0201 - mae: 0.1129 - val_loss: 0.0798 - val_mse: 0.0798 - val_mae: 0.2336 - lr: 0.0010 - 269ms/epoch - 6ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.07401
43/43 - 0s - loss: 0.0109 - mse: 0.0109 - mae: 0.0826 - val_loss: 0.0963 - val_mse: 0.0963 - val_mae: 0.2658 - lr: 0.0010 - 252ms/epoch - 6ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.07401
43/43 - 0s - loss: 0.0098 - mse: 0.0098 - mae: 0.0787 - val_loss: 0.0889 - val_mse: 0.0889 - val_mae: 0.2555 - lr: 0.0010 - 303ms/epoch - 7ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.07401
43/43 - 0s - loss: 0.0085 - mse: 0.0085 - mae: 0.0728 - val_loss: 0.0884 - val_mse: 0.0884 - val_mae: 0.2562 - lr: 0.0010 - 286ms/epoch - 7ms/step
Epoch 6/500

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00006: val_loss did not improve from 0.07401
43/43 - 0s - loss: 0.0091 - mse: 0.0091 - mae: 0.0745 - val_loss: 0.0797 - val_mse: 0.0797 - val_mae: 0.2413 - lr: 0.0010 - 265ms/epoch - 6ms/step
Epoch 7/500

Epoch 00007: val_loss improved from 0.07401 to 0.06324, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0163 - mse: 0.0163 - mae: 0.1056 - val_loss: 0.0632 - val_mse: 0.0632 - val_mae: 0.2070 - lr: 1.0000e-04 - 319ms/epoch - 7ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0080 - mse: 0.0080 - mae: 0.0714 - val_loss: 0.0666 - val_mse: 0.0666 - val_mae: 0.2140 - lr: 1.0000e-04 - 279ms/epoch - 6ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0084 - mse: 0.0084 - mae: 0.0724 - val_loss: 0.0662 - val_mse: 0.0662 - val_mae: 0.2129 - lr: 1.0000e-04 - 273ms/epoch - 6ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0081 - mse: 0.0081 - mae: 0.0722 - val_loss: 0.0659 - val_mse: 0.0659 - val_mae: 0.2118 - lr: 1.0000e-04 - 261ms/epoch - 6ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0082 - mse: 0.0082 - mae: 0.0713 - val_loss: 0.0670 - val_mse: 0.0670 - val_mae: 0.2137 - lr: 1.0000e-04 - 270ms/epoch - 6ms/step
Epoch 12/500

Epoch 00012: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00012: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0079 - mse: 0.0079 - mae: 0.0692 - val_loss: 0.0684 - val_mse: 0.0684 - val_mae: 0.2165 - lr: 1.0000e-04 - 270ms/epoch - 6ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0641 - val_loss: 0.0683 - val_mse: 0.0683 - val_mae: 0.2164 - lr: 1.0000e-05 - 241ms/epoch - 6ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0647 - val_loss: 0.0682 - val_mse: 0.0682 - val_mae: 0.2159 - lr: 1.0000e-05 - 249ms/epoch - 6ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0657 - val_loss: 0.0681 - val_mse: 0.0681 - val_mae: 0.2158 - lr: 1.0000e-05 - 280ms/epoch - 7ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0654 - val_loss: 0.0680 - val_mse: 0.0680 - val_mae: 0.2156 - lr: 1.0000e-05 - 297ms/epoch - 7ms/step
Epoch 17/500

Epoch 00017: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00017: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0622 - val_loss: 0.0681 - val_mse: 0.0681 - val_mae: 0.2157 - lr: 1.0000e-05 - 258ms/epoch - 6ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0656 - val_loss: 0.0682 - val_mse: 0.0682 - val_mae: 0.2159 - lr: 1.0000e-05 - 262ms/epoch - 6ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0660 - val_loss: 0.0681 - val_mse: 0.0681 - val_mae: 0.2158 - lr: 1.0000e-05 - 272ms/epoch - 6ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0656 - val_loss: 0.0682 - val_mse: 0.0682 - val_mae: 0.2159 - lr: 1.0000e-05 - 266ms/epoch - 6ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0612 - val_loss: 0.0684 - val_mse: 0.0684 - val_mae: 0.2163 - lr: 1.0000e-05 - 268ms/epoch - 6ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0647 - val_loss: 0.0687 - val_mse: 0.0687 - val_mae: 0.2170 - lr: 1.0000e-05 - 301ms/epoch - 7ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0641 - val_loss: 0.0692 - val_mse: 0.0692 - val_mae: 0.2179 - lr: 1.0000e-05 - 264ms/epoch - 6ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0631 - val_loss: 0.0694 - val_mse: 0.0694 - val_mae: 0.2183 - lr: 1.0000e-05 - 274ms/epoch - 6ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0629 - val_loss: 0.0695 - val_mse: 0.0695 - val_mae: 0.2186 - lr: 1.0000e-05 - 273ms/epoch - 6ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0634 - val_loss: 0.0696 - val_mse: 0.0696 - val_mae: 0.2188 - lr: 1.0000e-05 - 293ms/epoch - 7ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0620 - val_loss: 0.0697 - val_mse: 0.0697 - val_mae: 0.2190 - lr: 1.0000e-05 - 280ms/epoch - 7ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0661 - val_loss: 0.0696 - val_mse: 0.0696 - val_mae: 0.2186 - lr: 1.0000e-05 - 277ms/epoch - 6ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0650 - val_loss: 0.0698 - val_mse: 0.0698 - val_mae: 0.2190 - lr: 1.0000e-05 - 256ms/epoch - 6ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0637 - val_loss: 0.0700 - val_mse: 0.0700 - val_mae: 0.2194 - lr: 1.0000e-05 - 280ms/epoch - 7ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0658 - val_loss: 0.0699 - val_mse: 0.0699 - val_mae: 0.2192 - lr: 1.0000e-05 - 266ms/epoch - 6ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0624 - val_loss: 0.0702 - val_mse: 0.0702 - val_mae: 0.2199 - lr: 1.0000e-05 - 272ms/epoch - 6ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0632 - val_loss: 0.0705 - val_mse: 0.0705 - val_mae: 0.2203 - lr: 1.0000e-05 - 294ms/epoch - 7ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0617 - val_loss: 0.0702 - val_mse: 0.0702 - val_mae: 0.2198 - lr: 1.0000e-05 - 261ms/epoch - 6ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0653 - val_loss: 0.0705 - val_mse: 0.0705 - val_mae: 0.2203 - lr: 1.0000e-05 - 255ms/epoch - 6ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0624 - val_loss: 0.0708 - val_mse: 0.0708 - val_mae: 0.2210 - lr: 1.0000e-05 - 282ms/epoch - 7ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0626 - val_loss: 0.0709 - val_mse: 0.0709 - val_mae: 0.2211 - lr: 1.0000e-05 - 268ms/epoch - 6ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0645 - val_loss: 0.0710 - val_mse: 0.0710 - val_mae: 0.2213 - lr: 1.0000e-05 - 257ms/epoch - 6ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0634 - val_loss: 0.0710 - val_mse: 0.0710 - val_mae: 0.2214 - lr: 1.0000e-05 - 281ms/epoch - 7ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0622 - val_loss: 0.0710 - val_mse: 0.0710 - val_mae: 0.2214 - lr: 1.0000e-05 - 300ms/epoch - 7ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0636 - val_loss: 0.0712 - val_mse: 0.0712 - val_mae: 0.2218 - lr: 1.0000e-05 - 298ms/epoch - 7ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0622 - val_loss: 0.0720 - val_mse: 0.0720 - val_mae: 0.2234 - lr: 1.0000e-05 - 254ms/epoch - 6ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0627 - val_loss: 0.0721 - val_mse: 0.0721 - val_mae: 0.2235 - lr: 1.0000e-05 - 272ms/epoch - 6ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0621 - val_loss: 0.0726 - val_mse: 0.0726 - val_mae: 0.2245 - lr: 1.0000e-05 - 278ms/epoch - 6ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0664 - val_loss: 0.0723 - val_mse: 0.0723 - val_mae: 0.2239 - lr: 1.0000e-05 - 272ms/epoch - 6ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0630 - val_loss: 0.0725 - val_mse: 0.0725 - val_mae: 0.2242 - lr: 1.0000e-05 - 270ms/epoch - 6ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0610 - val_loss: 0.0724 - val_mse: 0.0724 - val_mae: 0.2241 - lr: 1.0000e-05 - 262ms/epoch - 6ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0611 - val_loss: 0.0727 - val_mse: 0.0727 - val_mae: 0.2246 - lr: 1.0000e-05 - 296ms/epoch - 7ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0612 - val_loss: 0.0728 - val_mse: 0.0728 - val_mae: 0.2249 - lr: 1.0000e-05 - 260ms/epoch - 6ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0626 - val_loss: 0.0728 - val_mse: 0.0728 - val_mae: 0.2248 - lr: 1.0000e-05 - 261ms/epoch - 6ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0605 - val_loss: 0.0730 - val_mse: 0.0730 - val_mae: 0.2252 - lr: 1.0000e-05 - 299ms/epoch - 7ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0606 - val_loss: 0.0733 - val_mse: 0.0733 - val_mae: 0.2259 - lr: 1.0000e-05 - 256ms/epoch - 6ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0631 - val_loss: 0.0734 - val_mse: 0.0734 - val_mae: 0.2260 - lr: 1.0000e-05 - 254ms/epoch - 6ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0591 - val_loss: 0.0734 - val_mse: 0.0734 - val_mae: 0.2261 - lr: 1.0000e-05 - 279ms/epoch - 6ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0593 - val_loss: 0.0734 - val_mse: 0.0734 - val_mae: 0.2261 - lr: 1.0000e-05 - 268ms/epoch - 6ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0616 - val_loss: 0.0734 - val_mse: 0.0734 - val_mae: 0.2259 - lr: 1.0000e-05 - 280ms/epoch - 7ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.06324
43/43 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0633 - val_loss: 0.0734 - val_mse: 0.0734 - val_mae: 0.2259 - lr: 1.0000e-05 - 257ms/epoch - 6ms/step
Epoch 00057: early stopping
SMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 52.20083682521797 
RMSE:	 7.225014659169763 
MAPE:	 5.885308101636885

EMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	47.39% Accuracy
MSE:	 63.36969283161608 
RMSE:	 7.960508327463522 
MAPE:	 6.712143148682283

WMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	41.04% Accuracy
MSE:	 71.61670961743842 
RMSE:	 8.462665633087392 
MAPE:	 6.541482431642796

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 168.8259463108875 
RMSE:	 12.99330390281423 
MAPE:	 11.83342735128442

KAMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 69.29548051216841 
RMSE:	 8.324390699154408 
MAPE:	 6.942234066695414

MIDPOINT
Prediction vs Close:		48.88% Accuracy
Prediction vs Prediction:	45.15% Accuracy
MSE:	 20.287194417114076 
RMSE:	 4.504130817051618 
MAPE:	 3.6982533278355527

T3
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 50.48940209767674 
RMSE:	 7.1055894968451945 
MAPE:	 5.703419596047134
TEMA
TEMA([input_arrays], [timeperiod=30])

Triple Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
9
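Per the help text above, TEMA composes three nested EMAs: TEMA = 3·EMA − 3·EMA(EMA) + EMA(EMA(EMA)). A pandas sketch of that identity — illustrative only, since TA-Lib's seeding and unstable warm-up period differ:

```python
import pandas as pd

def tema(s, timeperiod=30):
    # TEMA = 3*EMA1 - 3*EMA2 + EMA3 (nested EMAs of the same period)
    e1 = s.ewm(span=timeperiod, adjust=False).mean()
    e2 = e1.ewm(span=timeperiod, adjust=False).mean()
    e3 = e2.ewm(span=timeperiod, adjust=False).mean()
    return 3 * e1 - 3 * e2 + e3

# Sanity check: a flat series should be returned unchanged
flat = pd.Series([5.0] * 6)
out = tema(flat, timeperiod=3)
```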

Working on TEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16996.849, Time=3.64 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14177.794, Time=2.16 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16779.945, Time=8.11 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14417.099, Time=11.71 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16996.773, Time=3.90 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-14470.746, Time=10.02 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-16999.230, Time=3.65 sec
 ARIMA(0,3,3)(0,0,0)[0]             : AIC=-14413.099, Time=14.36 sec
 ARIMA(1,3,3)(0,0,0)[0]             : AIC=-16992.097, Time=4.88 sec
 ARIMA(0,3,2)(0,0,0)[0] intercept   : AIC=-16997.225, Time=3.56 sec

Best model:  ARIMA(0,3,2)(0,0,0)[0]          
Total fit time: 66.007 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(0, 3, 2)   Log Likelihood                8522.615
Date:                Sun, 12 Dec 2021   AIC                         -16999.230
Time:                        23:37:02   BIC                         -16891.341
Sample:                             0   HQIC                        -16957.796
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1           2.33e-15      0.001   2.87e-12      1.000      -0.002       0.002
x2         -4.502e-16      0.000  -1.15e-12      1.000      -0.001       0.001
x3          3.943e-17      0.001   5.53e-14      1.000      -0.001       0.001
x4             1.0000      0.001   1486.752      0.000       0.999       1.001
x5         -1.326e-14      0.001  -2.01e-11      1.000      -0.001       0.001
x6         -7.238e-16   6.02e-05   -1.2e-11      1.000      -0.000       0.000
x7          4.644e-16      0.000   1.63e-12      1.000      -0.001       0.001
x8            -0.0003   6.84e-05     -4.783      0.000      -0.000      -0.000
x9          4.956e-16      0.001   8.09e-13      1.000      -0.001       0.001
x10        -5.078e-05      0.000     -0.169      0.866      -0.001       0.001
x11            0.0005   8.52e-05      5.342      0.000       0.000       0.001
x12        -6.163e-05   6.76e-05     -0.912      0.362      -0.000    7.08e-05
x13        -6.225e-17      0.000  -1.81e-13      1.000      -0.001       0.001
x14         2.723e-16      0.000   1.71e-12      1.000      -0.000       0.000
x15         2.531e-13    9.1e-05   2.78e-09      1.000      -0.000       0.000
x16        -3.448e-13      0.000  -1.94e-09      1.000      -0.000       0.000
x17         1.188e-12      0.000   1.15e-08      1.000      -0.000       0.000
x18        -5.746e-14      0.000  -5.12e-10      1.000      -0.000       0.000
x19        -2.336e-13      0.000  -2.29e-09      1.000      -0.000       0.000
x20        -9.777e-15      0.000  -9.27e-11      1.000      -0.000       0.000
ma.L1         -1.3477   4.17e-08  -3.23e+07      0.000      -1.348      -1.348
ma.L2          0.3862   8.11e-08   4.76e+06      0.000       0.386       0.386
sigma2          1e-10   7.38e-11      1.355      0.175   -4.46e-11    2.45e-10
===================================================================================
Ljung-Box (L1) (Q):                  50.19   Jarque-Bera (JB):           4788158.62
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.04   Skew:                           -10.02
Prob(H) (two-sided):                  0.00   Kurtosis:                       380.29
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 6.4e+24. Standard errors may be unstable.
ARIMA order: (0, 3, 2) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.02723, saving model to LSTM7.h5
90/90 - 3s - loss: 0.0623 - mse: 0.0623 - mae: 0.1833 - val_loss: 0.0272 - val_mse: 0.0272 - val_mae: 0.1499 - lr: 0.0010 - 3s/epoch - 37ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.02723 to 0.02648, saving model to LSTM7.h5
90/90 - 1s - loss: 0.0244 - mse: 0.0244 - mae: 0.1177 - val_loss: 0.0265 - val_mse: 0.0265 - val_mae: 0.1482 - lr: 0.0010 - 608ms/epoch - 7ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.02648
90/90 - 1s - loss: 0.0153 - mse: 0.0153 - mae: 0.0952 - val_loss: 0.0309 - val_mse: 0.0309 - val_mae: 0.1470 - lr: 0.0010 - 575ms/epoch - 6ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.02648
90/90 - 0s - loss: 0.0149 - mse: 0.0149 - mae: 0.0937 - val_loss: 0.0448 - val_mse: 0.0448 - val_mae: 0.1514 - lr: 0.0010 - 499ms/epoch - 6ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.02648
90/90 - 1s - loss: 0.0175 - mse: 0.0175 - mae: 0.0996 - val_loss: 0.0397 - val_mse: 0.0397 - val_mae: 0.1437 - lr: 0.0010 - 527ms/epoch - 6ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.02648
90/90 - 1s - loss: 0.0166 - mse: 0.0166 - mae: 0.0979 - val_loss: 0.0420 - val_mse: 0.0420 - val_mae: 0.1437 - lr: 0.0010 - 514ms/epoch - 6ms/step

[Epochs 7-51 elided: ReduceLROnPlateau cut the learning rate to 1e-04 at epoch 7 and to 1e-05 at epoch 12; val_loss never improved on 0.02648, drifting from ~0.031 up to ~0.047 while training loss plateaued near 0.004]

Epoch 00052: early stopping
SMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 52.20083682521797 
RMSE:	 7.225014659169763 
MAPE:	 5.885308101636885

EMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	47.39% Accuracy
MSE:	 63.36969283161608 
RMSE:	 7.960508327463522 
MAPE:	 6.712143148682283

WMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	41.04% Accuracy
MSE:	 71.61670961743842 
RMSE:	 8.462665633087392 
MAPE:	 6.541482431642796

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 168.8259463108875 
RMSE:	 12.99330390281423 
MAPE:	 11.83342735128442

KAMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 69.29548051216841 
RMSE:	 8.324390699154408 
MAPE:	 6.942234066695414

MIDPOINT
Prediction vs Close:		48.88% Accuracy
Prediction vs Prediction:	45.15% Accuracy
MSE:	 20.287194417114076 
RMSE:	 4.504130817051618 
MAPE:	 3.6982533278355527

T3
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 50.48940209767674 
RMSE:	 7.1055894968451945 
MAPE:	 5.703419596047134

TEMA
Prediction vs Close:		51.12% Accuracy
Prediction vs Prediction:	47.39% Accuracy
MSE:	 63.76180736926059 
RMSE:	 7.98509908324628 
MAPE:	 7.313429173529297
Runtime: min: 65.85398025745002
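The two accuracy figures reported for each moving average are directional hit rates: "Prediction vs Close" checks whether the prediction and the actual close moved in the same direction relative to the previous close, while "Prediction vs Prediction" compares the direction of consecutive predictions against consecutive closes. A minimal NumPy sketch of both metrics (the arrays `pred` and `actual` below are illustrative, not taken from the experiment):

```python
import numpy as np

def directional_accuracy(pred, actual):
    """Hit rates of predicted vs. realised direction (vectorised sketch of
    the result_1/result_2 logic in the experiment loop; ties ignored)."""
    pred, actual = np.asarray(pred, float), np.asarray(actual, float)
    actual_up = np.diff(actual) > 0                     # did the close rise?
    # Prediction vs Close: prediction compared to the previous close
    pvc = np.mean((pred[1:] > actual[:-1]) == actual_up)
    # Prediction vs Prediction: consecutive predictions compared
    pvp = np.mean((np.diff(pred) > 0) == actual_up)
    return pvc, pvp

pred   = [10.0, 11.0, 10.5, 12.0]
actual = [10.0, 10.8, 10.9, 11.5]
print(directional_accuracy(pred, actual))  # both 2/3 here
```

Note that a model can have a low RMSE yet a directional accuracy near 50%, which is why both kinds of metric are reported.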

Architecture Used

In [ ]:
from google.colab import files
import cv2
uploaded = files.upload()
Saving Experiment7.png to Experiment7 (2).png
In [ ]:
# OpenCV loads images as BGR; convert so matplotlib renders the colours correctly
img = cv2.cvtColor(cv2.imread('Experiment7.png'), cv2.COLOR_BGR2RGB)
plt.figure(figsize=(20,10))
plt.axis("off")
plt.title('LSTM Architecture Experiment7', fontsize=18)
plt.imshow(img)
Out[ ]:
<matplotlib.image.AxesImage at 0x7f4c232a4350>

Model Plots

In [174]:
with open('simulation7_data.json') as json_file:
    simulation7 = json.load(json_file)
imgfile = 'Experiment7'
In [175]:
for i in range(len(list(simulation7.keys()))):
  SIM = list(simulation7.keys())[i]
  plot_train(simulation7,SIM)
  plot_test(simulation7,SIM)
----- Train RMSE for SMA ----- 9.057211794541606
----- Train_MSE_LSTM for SMA ----- 82.03308549118358
----- Train MAE LSTM for SMA ----- 7.94711277040072
----- Test RMSE for SMA----- 7.225014659169763
----- Test_MSE_LSTM for SMA----- 52.20083682521797
----- Test_MAE_LSTM for SMA----- 5.885308101636885
----- Train RMSE for EMA ----- 10.618347756101933
----- Train_MSE_LSTM for EMA ----- 112.74930906951495
----- Train MAE LSTM for EMA ----- 9.56987204528091
----- Test RMSE for EMA----- 7.960508327463522
----- Test_MSE_LSTM for EMA----- 63.36969283161608
----- Test_MAE_LSTM for EMA----- 6.712143148682283
----- Train RMSE for WMA ----- 10.936273635149417
----- Train_MSE_LSTM for WMA ----- 119.60208102286424
----- Train MAE LSTM for WMA ----- 9.916874330919605
----- Test RMSE for WMA----- 8.462665633087392
----- Test_MSE_LSTM for WMA----- 71.61670961743842
----- Test_MAE_LSTM for WMA----- 6.541482431642796
----- Train RMSE for DEMA ----- 12.811456939062188
----- Train_MSE_LSTM for DEMA ----- 164.1334289014447
----- Train MAE LSTM for DEMA ----- 11.627435864373526
----- Test RMSE for DEMA----- 12.99330390281423
----- Test_MSE_LSTM for DEMA----- 168.8259463108875
----- Test_MAE_LSTM for DEMA----- 11.83342735128442
----- Train RMSE for KAMA ----- 10.744564202390054
----- Train_MSE_LSTM for KAMA ----- 115.44565989928182
----- Train MAE LSTM for KAMA ----- 9.731041218642845
----- Test RMSE for KAMA----- 8.324390699154408
----- Test_MSE_LSTM for KAMA----- 69.29548051216841
----- Test_MAE_LSTM for KAMA----- 6.942234066695414
----- Train RMSE for MIDPOINT ----- 9.643330328183875
----- Train_MSE_LSTM for MIDPOINT ----- 92.99381981847093
----- Train MAE LSTM for MIDPOINT ----- 8.55559535246656
----- Test RMSE for MIDPOINT----- 4.504130817051618
----- Test_MSE_LSTM for MIDPOINT----- 20.287194417114076
----- Test_MAE_LSTM for MIDPOINT----- 3.6982533278355527
----- Train RMSE for T3 ----- 12.403273889547718
----- Train_MSE_LSTM for T3 ----- 153.84120317913616
----- Train MAE LSTM for T3 ----- 11.217496684263692
----- Test RMSE for T3----- 7.1055894968451945
----- Test_MSE_LSTM for T3----- 50.48940209767674
----- Test_MAE_LSTM for T3----- 5.703419596047134
----- Train RMSE for TEMA ----- 7.403494010381279
----- Train_MSE_LSTM for TEMA ----- 54.81172356175147
----- Train MAE LSTM for TEMA ----- 5.156797193308905
----- Test RMSE for TEMA----- 7.98509908324628
----- Test_MSE_LSTM for TEMA----- 63.76180736926059
----- Test_MAE_LSTM for TEMA----- 7.313429173529297

ARIMA with Exogenous Variables + Multistep Multivariate LSTM Hybrid Model: Experiment 8
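As in the earlier experiments, each series is decomposed into a low-volatility component (the moving average, modelled by ARIMA) and a high-volatility residual (close minus moving average, modelled by the LSTM), and the two forecasts are recombined at the end. A minimal sketch of that decomposition with a simple moving average (the `close` list and `period` are illustrative; the notebook pads NaNs with 0 via `fillna(0)`):

```python
import numpy as np

def decompose(close, period=3):
    """Split a price series into an SMA (low-volatility) part and the
    residual (high-volatility) part, padding the warm-up window with 0
    as the notebook's fillna(0) does."""
    close = np.asarray(close, float)
    sma = np.convolve(close, np.ones(period) / period, mode='valid')
    low_vol = np.concatenate([np.zeros(period - 1), sma])
    high_vol = close - low_vol
    return low_vol, high_vol

close = [10., 11., 12., 13., 14.]
low, high = decompose(close)
print(low + high)  # low + high reconstructs the close exactly
```

Because the decomposition is exact, summing the two components' forecasts yields a forecast of the close itself, which is the basis of the hybrid.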

In [ ]:
def get_arima_exog(dataframe, original_data, train_len, test_len):
    # Prepare train and test data for the exogenous variables
    # (use the dataframe argument rather than the low_vol global)
    X_value = pd.DataFrame(dataframe.iloc[:, :])
    y_value = pd.DataFrame(dataframe.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    X_train, X_test = split_train_test(X_scale_dataset)
    y_train, y_test = split_train_test(y_scale_dataset)
    yc_train, yc_test = split_train_test(original_data)
    yc = yc_test.values.tolist()
    y_train_list = y_train.flatten().tolist()
    y_test_list = y_test.flatten().tolist()
    index_train, index_test = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)

    # Search for the best order, with the exogenous regressors included
    model = auto_arima(y_train_list, exogenous=X_train, trace=True, error_action='ignore',
                       start_p=1, start_q=1, max_p=3, max_q=3, d=3,
                       suppress_warnings=True, stepwise=True, seasonal=True)

    # Determine model parameters
    print(model.summary())
    model.fit(y_train_list, maxiter=200)
    order = model.get_params()['order']
    print('ARIMA order:', order, '\n')

    # Generate walk-forward one-step predictions, refitting on an expanding
    # window (the per-step refits use only the endogenous series)
    prediction = []
    for i in range(len(y_test_list)):
        model = pmdarima.ARIMA(order=order)
        model.fit(y_train_list)
        prediction.append(model.predict()[0])
        y_train_list.append(y_test_list[i])

    predictionte = y_scaler.inverse_transform(np.array(prediction).reshape(-1, 1))
    y_test_ = y_scaler.inverse_transform(np.array(y_test_list).reshape(-1, 1))

    # Generate error data on the inverse-transformed scale
    mse = mean_squared_error(y_test_, predictionte)
    rmse = mse ** 0.5
    mae = mean_absolute_error(y_test_, predictionte)
    return yc, predictionte.flatten().tolist(), mse, rmse, mae
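The loop in `get_arima_exog` refits a fresh ARIMA of the selected order on an expanding window, forecasting one step at a time. The same walk-forward pattern can be sketched with a trivial zero-intercept AR(1) fitted by least squares in place of pmdarima (all names and values here are illustrative):

```python
import numpy as np

def walk_forward_ar1(train, test):
    """One-step-ahead walk-forward forecasting: after each step the realised
    value is appended to the history and the model is refitted, mirroring
    the y_train_list.append(...) pattern in get_arima_exog."""
    history = list(train)
    preds = []
    for obs in test:
        x, y = np.array(history[:-1]), np.array(history[1:])
        phi = float(x @ y / (x @ x))     # least-squares AR(1) coefficient
        preds.append(phi * history[-1])  # one-step-ahead forecast
        history.append(obs)              # expand the fitting window
    return preds

print(walk_forward_ar1([1.0, 2.0, 4.0, 8.0], [16.0, 32.0]))  # [16.0, 32.0]
```

Refitting at every step is the expensive part; with a full ARIMA the cost grows with the test length, which is why the notebook times each run.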
In [ ]:
def get_lstm(data, original_data, train_len, test_len, img_file, ma, lstm_len=3):
    # Prepare train and test data
    X_value = pd.DataFrame(data.iloc[:, :])
    y_value = pd.DataFrame(data.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)  # these two transforms were missing, leaving X_scale_dataset undefined below
    y_scale_dataset = y_scaler.fit_transform(y_value)
    # Get data and check shape: X has shape (samples, n_steps_in, features); yc holds the corresponding closing prices
    X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)
    X_train, X_test = split_train_test(X)
    y_train, y_test = split_train_test(y)
    index_train, index_test = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    det = 20  # constant offset subtracted from the test predictions below
    input_dim = X_train.shape[1]      # n_steps_in
    feature_size = X_train.shape[2]   # number of input features
    output_dim = y_train.shape[1]     # n_steps_out



    # Option 1
    # Set up & fit LSTM RNN
    # model = Sequential()
    # model.add(LSTM(256, activation='relu', kernel_initializer='he_normal', input_shape=(input_dim, feature_size)))
    # model.add(Dense(units=64,activation='relu'))
    # model.add(Dropout(0.5))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(learning_rate = 0.001), loss='mse')

    # ## Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()


    # # # option 2
    # model = Sequential()
    # model.add(Bidirectional(LSTM(units= 128), input_shape=(input_dim, feature_size)))
    # model.add(Dense(64))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(lr = 0.001), loss='mean_squared_error', metrics=['accuracy'])
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM7.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()

    # Option 3
    # define custom activation
    # 
    # class Double_Tanh(Activation):
    #     def __init__(self, activation, **kwargs):
    #         super(Double_Tanh, self).__init__(activation, **kwargs)
    #         self.__name__ = 'double_tanh'

    # def double_tanh(x):
    #     return (K.tanh(x) * 2)

    # get_custom_objects().update({'double_tanh':Double_Tanh(double_tanh)})
    #     # Model Generation
    # model = Sequential()
    # #check https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
    # model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2, kernel_regularizer=l1_l2(0.00,0.00), bias_regularizer=l1_l2(0.00,0.00)))
    # model.add(Dense(1))
    # model.add(Activation(double_tanh))
    # model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM7.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()

    # Option 4: stacked LSTM (the configuration used for this run)
    model = Sequential()
    model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(input_dim, feature_size)))
    model.add(LSTM(units=int(lstm_len/2)))
    model.add(Dense(1, activation='sigmoid'))  # note: sigmoid outputs lie in (0, 1) while targets are scaled to (-1, 1); tanh may be a better fit
    model.compile(loss='mean_squared_error', optimizer='adam')
    # Common code
    callbacks = [
        EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
        ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
        ModelCheckpoint('LSTM8.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    fname1 = img_file+'.png'
    tensorflow.keras.utils.plot_model(
        model, to_file=fname1, show_shapes=True, show_dtype=False,
        show_layer_names=True, expand_nested=False, dpi=96,
        layer_range=None, show_layer_activations=False
    )
    history = model.fit(X_train, y_train, epochs=500, batch_size=int( optimized_period[ma]), verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # plot loss
    fname2 = img_file+'-'+ma
    plt.title(img_file+'-'+ma+' Loss')
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    pyplot.plot(history.history['loss'], label='train')
    pyplot.plot(history.history['val_loss'], label='validation')
    pyplot.legend()
    pyplot.savefig(fname2+'.png',dpi='figure')
    pyplot.show()



    # Generate predictions
    predictiontr = model.predict(X_train, verbose=0)
    predictiontr = y_scaler.inverse_transform(predictiontr).tolist()
    outputtr = []
    for i in range(len(predictiontr)):
        outputtr.extend(predictiontr[i])
    predictiontr = outputtr
    # Generate error data on the original (inverse-transformed) scale
    ## replace with yc, X_test generated by new multistep method
    Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()
    mse_tr = mean_squared_error(Original_tr, predictiontr)
    rmse_tr = mse_tr ** 0.5
    mae_tr = mean_absolute_error(Original_tr, pd.Series(predictiontr))


    predictionte = model.predict(X_test, verbose=0)
    predictionte = (y_scaler.inverse_transform(predictionte)-det).tolist()
    outputte = []
    for i in range(len(predictionte)):
        outputte.extend(predictionte[i])
    predictionte = outputte
    # Generate error data on the original (inverse-transformed) scale
    Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()
    mse_te = mean_squared_error(Original_te, predictionte)
    rmse_te = mse_te ** 0.5
    mae_te = mean_absolute_error(Original_te, pd.Series(predictionte))

    return Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,Original_te,predictionte, mse_te,rmse_te,mae_te
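Both helpers rely on `MinMaxScaler(feature_range=(-1, 1))` and on inverting that transform before computing errors. The round trip reduces to simple arithmetic; a NumPy sketch of what `fit_transform`/`inverse_transform` do on a single column (the price values are illustrative):

```python
import numpy as np

def minmax_scale(x, lo=-1.0, hi=1.0):
    """Scale x into [lo, hi]; return the fit parameters needed to invert."""
    x = np.asarray(x, float)
    xmin, xmax = x.min(), x.max()
    scaled = lo + (x - xmin) * (hi - lo) / (xmax - xmin)
    return scaled, (xmin, xmax)

def minmax_inverse(scaled, params, lo=-1.0, hi=1.0):
    """Undo minmax_scale using the stored min/max."""
    xmin, xmax = params
    return xmin + (np.asarray(scaled) - lo) * (xmax - xmin) / (hi - lo)

prices = [100., 110., 120.]
scaled, params = minmax_scale(prices)
print(scaled)                          # [-1.  0.  1.]
print(minmax_inverse(scaled, params))  # recovers [100. 110. 120.]
```

The practical point is that errors must be computed with both series on the same scale, either both scaled or both inverse-transformed, or the MSE/RMSE/MAE figures are meaningless.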
In [ ]:
if __name__ == '__main__':
    start_time = timeit.default_timer()
    simulation8 = {}
    imgfile = 'Experiment8'
    for ma in optimized_period:
                print(ma)
                print(functions[ma])
                print(int(optimized_period[ma]))
                low_vol = df.apply(lambda c: functions[ma](c, timeperiod=int(optimized_period[ma])))
                low_vol = low_vol.fillna(0)
                low_vol_data = df['close']
                high_vol = pd.DataFrame()
                df2 = df.copy()
                for i in df2.columns:
                  if i in low_vol.columns:
                    high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
                high_vol_data = df['close']
                ## *****************************************************
                # Generate ARIMA and LSTM predictions
                print('\nWorking on ' + ma + ' predictions')
                try:
                  print('parameters used : ', train_len, test_len)
                  low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse,low_vol_mae = get_arima_exog(low_vol,low_vol_data, train_len, test_len)
                except Exception:
                    print('ARIMA error, skipping to next MA type')
                    continue
                Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,high_vol_Original, high_vol_prediction, high_vol_mse, high_vol_rmse,high_vol_mae, = get_lstm(high_vol,high_vol_data, train_len, test_len,imgfile,ma)
                final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr)
                mse_ftr = mean_squared_error(df['close'].head(train_len).values,final_prediction_tr.values)
                rmse_ftr = mse_ftr ** 0.5
                mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
                mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)

                final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)  # drop the first 3 ARIMA steps to align with the windowed LSTM output
                mse = mean_squared_error(df['close'].tail(test_len).values,final_prediction.values)
                rmse = mse ** 0.5
                mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
                mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
                # Generate prediction accuracy
                actual = df['close'].tail(test_len).values
                result_1 = []
                result_2 = []
                for i in range(1, len(final_prediction)):
                    # Compare prediction to previous close price
                    if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
                        result_1.append(1)
                    elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
                        result_1.append(1)
                    else:
                        result_1.append(0)

                    # Compare prediction to previous prediction
                    if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
                        result_2.append(1)
                    elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
                        result_2.append(1)
                    else:
                        result_2.append(0)

                accuracy_1 = np.mean(result_1)
                accuracy_2 = np.mean(result_2)

                simulation8[ma] = {'low_vol': {'original':list(low_vol_Original), 'prediction': list(low_vol_prediction) , 'mse': low_vol_mse,
                                              'rmse': low_vol_rmse, 'mae' : low_vol_mae},
                                  'high_vol': {'original':list(high_vol_Original),'prediction': list(high_vol_prediction), 'mse': high_vol_mse,
                                              'rmse': high_vol_rmse, 'mae' : high_vol_mae},
                                  'final_tr': {'original':df['close'].head(train_len).tolist(),'prediction': final_prediction_tr.values.tolist(), 'mse': mse_ftr,
                                              'rmse': rmse_ftr, 'mae' : mae_ftr},
                                  'final': {'original': df['close'].tail(test_len).tolist(), 'prediction': final_prediction.values.tolist(), 'mse': mse,
                                            'rmse': rmse, 'mae': mae },
                                  'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}

                # save simulation data here as checkpoint
                with open('simulation8_data.json', 'w') as fp:
                    json.dump(simulation8, fp)

                for key in simulation8.keys():  # use a new name so the outer ma loop variable is not clobbered
                    print('\n' + key)
                    print('Prediction vs Close:\t\t' + str(round(100*simulation8[key]['accuracy']['prediction vs close'], 2))
                          + '% Accuracy')
                    print('Prediction vs Prediction:\t' + str(round(100*simulation8[key]['accuracy']['prediction vs prediction'], 2))
                          + '% Accuracy')
                    print('MSE:\t', simulation8[key]['final']['mse'],
                          '\nRMSE:\t', simulation8[key]['final']['rmse'],
                          '\nMAE:\t', simulation8[key]['final']['mae'])
    elapsed = timeit.default_timer() - start_time
    print('Runtime: mins:', elapsed/60)  # default_timer returns seconds
SMA
SMA([input_arrays], [timeperiod=30])

Simple Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
17

Working on SMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-15057.252, Time=4.91 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-13616.841, Time=2.91 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-15177.809, Time=11.14 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14725.568, Time=12.35 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-15511.840, Time=16.89 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-15663.563, Time=16.54 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-15093.498, Time=7.85 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-15194.504, Time=11.47 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=-14885.340, Time=20.83 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 104.921 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood                7855.782
Date:                Sun, 12 Dec 2021   AIC                         -15663.563
Time:                        23:48:54   BIC                         -15550.983
Sample:                             0   HQIC                        -15620.328
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -1.202e-05   4.78e-05     -0.251      0.801      -0.000    8.17e-05
x2         -1.202e-05   2.63e-05     -0.458      0.647   -6.35e-05    3.95e-05
x3          -1.21e-05      0.000     -0.118      0.906      -0.000       0.000
x4             1.0000   3.59e-05   2.79e+04      0.000       1.000       1.000
x5         -1.149e-05   3.47e-05     -0.332      0.740   -7.94e-05    5.65e-05
x6         -1.354e-05   2.94e-05     -0.461      0.645   -7.11e-05     4.4e-05
x7         -1.198e-05   3.25e-06     -3.693      0.000   -1.83e-05   -5.62e-06
x8             0.0027   9.17e-06    293.847      0.000       0.003       0.003
x9         -8.458e-07      0.000     -0.006      0.995      -0.000       0.000
x10            0.0005      0.000      1.213      0.225      -0.000       0.001
x11           -0.0027   4.93e-05    -54.454      0.000      -0.003      -0.003
x12            0.0007   3.53e-05     19.122      0.000       0.001       0.001
x13        -1.207e-05   2.16e-05     -0.559      0.576   -5.44e-05    3.03e-05
x14        -3.571e-05   1.38e-05     -2.581      0.010   -6.28e-05   -8.59e-06
x15        -1.308e-05   2.71e-06     -4.820      0.000   -1.84e-05   -7.76e-06
x16         -1.12e-05   4.71e-05     -0.238      0.812      -0.000    8.11e-05
x17        -1.059e-05   1.48e-05     -0.715      0.474   -3.96e-05    1.84e-05
x18         -2.03e-05   5.97e-05     -0.340      0.734      -0.000    9.68e-05
x19        -1.389e-05   3.69e-05     -0.376      0.707   -8.63e-05    5.85e-05
x20         2.105e-05      0.000      0.107      0.915      -0.000       0.000
ar.L1         -1.1996   4.09e-05  -2.93e+04      0.000      -1.200      -1.200
ar.L2         -0.8995   1.54e-05  -5.82e+04      0.000      -0.900      -0.899
ar.L3         -0.3999   1.46e-05  -2.74e+04      0.000      -0.400      -0.400
sigma2      2.425e-10   7.55e-11      3.213      0.001    9.46e-11     3.9e-10
===================================================================================
Ljung-Box (L1) (Q):                  14.46   Jarque-Bera (JB):           2454147.19
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                            -3.95
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.38
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.88e+20. Standard errors may be unstable.
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.05045, saving model to LSTM8.h5
48/48 - 5s - loss: 1.4181 - val_loss: 0.0505 - lr: 0.0010 - 5s/epoch - 103ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.3767 - val_loss: 0.0536 - lr: 0.0010 - 334ms/epoch - 7ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.3257 - val_loss: 0.0574 - lr: 0.0010 - 347ms/epoch - 7ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.2662 - val_loss: 0.0617 - lr: 0.0010 - 362ms/epoch - 8ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.2063 - val_loss: 0.0668 - lr: 0.0010 - 340ms/epoch - 7ms/step
Epoch 6/500

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00006: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.1518 - val_loss: 0.0725 - lr: 0.0010 - 361ms/epoch - 8ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.1205 - val_loss: 0.0731 - lr: 1.0000e-04 - 349ms/epoch - 7ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.1158 - val_loss: 0.0737 - lr: 1.0000e-04 - 340ms/epoch - 7ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.1112 - val_loss: 0.0743 - lr: 1.0000e-04 - 352ms/epoch - 7ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.1067 - val_loss: 0.0749 - lr: 1.0000e-04 - 345ms/epoch - 7ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00011: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.1022 - val_loss: 0.0756 - lr: 1.0000e-04 - 338ms/epoch - 7ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0994 - val_loss: 0.0756 - lr: 1.0000e-05 - 370ms/epoch - 8ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0990 - val_loss: 0.0757 - lr: 1.0000e-05 - 335ms/epoch - 7ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0985 - val_loss: 0.0757 - lr: 1.0000e-05 - 347ms/epoch - 7ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0981 - val_loss: 0.0758 - lr: 1.0000e-05 - 390ms/epoch - 8ms/step
Epoch 16/500

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00016: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0977 - val_loss: 0.0759 - lr: 1.0000e-05 - 344ms/epoch - 7ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0972 - val_loss: 0.0759 - lr: 1.0000e-05 - 343ms/epoch - 7ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0968 - val_loss: 0.0760 - lr: 1.0000e-05 - 357ms/epoch - 7ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0963 - val_loss: 0.0761 - lr: 1.0000e-05 - 342ms/epoch - 7ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0959 - val_loss: 0.0762 - lr: 1.0000e-05 - 351ms/epoch - 7ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0955 - val_loss: 0.0762 - lr: 1.0000e-05 - 354ms/epoch - 7ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0950 - val_loss: 0.0763 - lr: 1.0000e-05 - 337ms/epoch - 7ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0946 - val_loss: 0.0764 - lr: 1.0000e-05 - 357ms/epoch - 7ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0941 - val_loss: 0.0764 - lr: 1.0000e-05 - 358ms/epoch - 7ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0937 - val_loss: 0.0765 - lr: 1.0000e-05 - 348ms/epoch - 7ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0932 - val_loss: 0.0766 - lr: 1.0000e-05 - 348ms/epoch - 7ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0928 - val_loss: 0.0766 - lr: 1.0000e-05 - 354ms/epoch - 7ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0924 - val_loss: 0.0767 - lr: 1.0000e-05 - 334ms/epoch - 7ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0919 - val_loss: 0.0768 - lr: 1.0000e-05 - 351ms/epoch - 7ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0915 - val_loss: 0.0769 - lr: 1.0000e-05 - 342ms/epoch - 7ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0910 - val_loss: 0.0769 - lr: 1.0000e-05 - 335ms/epoch - 7ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0906 - val_loss: 0.0770 - lr: 1.0000e-05 - 350ms/epoch - 7ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0902 - val_loss: 0.0771 - lr: 1.0000e-05 - 361ms/epoch - 8ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0897 - val_loss: 0.0771 - lr: 1.0000e-05 - 329ms/epoch - 7ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0893 - val_loss: 0.0772 - lr: 1.0000e-05 - 343ms/epoch - 7ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0888 - val_loss: 0.0773 - lr: 1.0000e-05 - 338ms/epoch - 7ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0884 - val_loss: 0.0774 - lr: 1.0000e-05 - 347ms/epoch - 7ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0880 - val_loss: 0.0774 - lr: 1.0000e-05 - 357ms/epoch - 7ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0875 - val_loss: 0.0775 - lr: 1.0000e-05 - 344ms/epoch - 7ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0871 - val_loss: 0.0776 - lr: 1.0000e-05 - 341ms/epoch - 7ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0866 - val_loss: 0.0777 - lr: 1.0000e-05 - 350ms/epoch - 7ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0862 - val_loss: 0.0777 - lr: 1.0000e-05 - 341ms/epoch - 7ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0858 - val_loss: 0.0778 - lr: 1.0000e-05 - 348ms/epoch - 7ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0853 - val_loss: 0.0779 - lr: 1.0000e-05 - 344ms/epoch - 7ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0849 - val_loss: 0.0780 - lr: 1.0000e-05 - 344ms/epoch - 7ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0844 - val_loss: 0.0780 - lr: 1.0000e-05 - 354ms/epoch - 7ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0840 - val_loss: 0.0781 - lr: 1.0000e-05 - 339ms/epoch - 7ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0836 - val_loss: 0.0782 - lr: 1.0000e-05 - 336ms/epoch - 7ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0831 - val_loss: 0.0783 - lr: 1.0000e-05 - 365ms/epoch - 8ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0827 - val_loss: 0.0783 - lr: 1.0000e-05 - 366ms/epoch - 8ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.05045
48/48 - 0s - loss: 1.0823 - val_loss: 0.0784 - lr: 1.0000e-05 - 356ms/epoch - 7ms/step
Epoch 00051: early stopping
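The training log above is consistent with a standard Keras callback stack: a checkpoint that saves the best weights to `LSTM8.h5`, a learning-rate scheduler that drops the rate tenfold after five stagnant epochs (1e-3 → 1e-4 → 1e-5, clamped at `min_lr`), and early stopping once 50 epochs pass without improvement. The patience values and factor here are read off the log, not confirmed from the source, so treat this as a sketch:

```python
from tensorflow.keras.callbacks import (
    EarlyStopping,
    ModelCheckpoint,
    ReduceLROnPlateau,
)

# Callback settings inferred from the log above (assumptions, not the
# notebook's verified configuration).
callbacks = [
    # "val_loss improved from inf to ..., saving model to LSTM8.h5"
    ModelCheckpoint("LSTM8.h5", monitor="val_loss",
                    save_best_only=True, verbose=1),
    # LR reduced at epochs 6, 11, 16 -> pator of 10 every 5 flat epochs
    ReduceLROnPlateau(monitor="val_loss", factor=0.1, patience=5,
                      min_lr=1e-5, verbose=1),
    # "Epoch 00051: early stopping" with the best epoch at 1 -> patience 50
    EarlyStopping(monitor="val_loss", patience=50, verbose=1),
]

# model.fit(X_train, y_train, epochs=500,
#           validation_data=(X_val, y_val),
#           callbacks=callbacks, verbose=2)
```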
SMA
Prediction vs Close:		54.85% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 23.581571013749205 
RMSE:	 4.8560859767665985 
MAPE:	 3.8329222497365705
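The MSE/RMSE/MAPE figures above follow their standard definitions; the exact meaning of the two accuracy lines is not shown in the output, so the directional variant below (does the predicted move match the actual move?) is an assumption used for illustration:

```python
import numpy as np

def forecast_metrics(y_true, y_pred):
    """MSE, RMSE, and MAPE as reported in the evaluation block above."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mse = np.mean((y_true - y_pred) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100
    return mse, rmse, mape

def directional_accuracy(y_true, y_pred):
    """Percent of steps where the predicted direction of change matches
    the actual direction (a plausible reading of 'Prediction vs Close')."""
    true_dir = np.sign(np.diff(np.asarray(y_true, dtype=float)))
    pred_dir = np.sign(np.diff(np.asarray(y_pred, dtype=float)))
    return np.mean(true_dir == pred_dir) * 100
```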
EMA
EMA([input_arrays], [timeperiod=30])

Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
51

Working on EMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17007.807, Time=3.79 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14576.593, Time=5.07 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-15585.734, Time=9.71 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14574.593, Time=7.87 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-15458.426, Time=12.12 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15621.247, Time=13.61 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-17231.605, Time=21.61 sec
 ARIMA(0,3,3)(0,0,0)[0]             : AIC=-14570.593, Time=10.33 sec
 ARIMA(1,3,3)(0,0,0)[0]             : AIC=-16761.093, Time=17.56 sec
 ARIMA(0,3,2)(0,0,0)[0] intercept   : AIC=-13173.936, Time=33.35 sec

Best model:  ARIMA(0,3,2)(0,0,0)[0]          
Total fit time: 135.024 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(0, 3, 2)   Log Likelihood                8638.803
Date:                Sun, 12 Dec 2021   AIC                         -17231.605
Time:                        23:54:18   BIC                         -17123.716
Sample:                             0   HQIC                        -17190.171
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -5.101e-09   4.36e-05     -0.000      1.000   -8.54e-05    8.54e-05
x2         -5.085e-09   4.35e-05     -0.000      1.000   -8.53e-05    8.53e-05
x3          -5.12e-09   4.36e-05     -0.000      1.000   -8.56e-05    8.55e-05
x4             1.0000   4.36e-05   2.29e+04      0.000       1.000       1.000
x5         -4.635e-09   4.15e-05     -0.000      1.000   -8.14e-05    8.14e-05
x6         -1.766e-08   7.54e-05     -0.000      1.000      -0.000       0.000
x7         -5.054e-09   4.34e-05     -0.000      1.000    -8.5e-05     8.5e-05
x8         -4.941e-09   4.29e-05     -0.000      1.000   -8.41e-05    8.41e-05
x9         -3.138e-10   8.71e-06   -3.6e-05      1.000   -1.71e-05    1.71e-05
x10        -1.002e-09   1.85e-05  -5.41e-05      1.000   -3.63e-05    3.63e-05
x11        -4.879e-09   4.26e-05     -0.000      1.000   -8.36e-05    8.36e-05
x12        -4.991e-09   4.31e-05     -0.000      1.000   -8.46e-05    8.45e-05
x13        -5.099e-09   4.36e-05     -0.000      1.000   -8.54e-05    8.54e-05
x14        -3.925e-08      0.000     -0.000      1.000      -0.000       0.000
x15        -4.597e-09   4.13e-05     -0.000      1.000    -8.1e-05     8.1e-05
x16        -1.164e-08    6.6e-05     -0.000      1.000      -0.000       0.000
x17        -4.702e-09   4.19e-05     -0.000      1.000   -8.22e-05    8.22e-05
x18        -8.297e-10   1.65e-05  -5.02e-05      1.000   -3.24e-05    3.24e-05
x19        -5.725e-09   4.61e-05     -0.000      1.000   -9.04e-05    9.04e-05
x20        -5.511e-09   4.28e-05     -0.000      1.000    -8.4e-05    8.39e-05
ma.L1         -1.3891   1.96e-08  -7.08e+07      0.000      -1.389      -1.389
ma.L2          0.4027   2.02e-08   1.99e+07      0.000       0.403       0.403
sigma2      7.547e-11   6.92e-11      1.091      0.275   -6.01e-11    2.11e-10
===================================================================================
Ljung-Box (L1) (Q):                  67.97   Jarque-Bera (JB):           6306943.47
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                            12.31
Prob(H) (two-sided):                  0.00   Kurtosis:                       435.93
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 3.3e+24. Standard errors may be unstable.
ARIMA order: (0, 3, 2) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.04663, saving model to LSTM8.h5
16/16 - 5s - loss: 1.3950 - val_loss: 0.0466 - lr: 0.0010 - 5s/epoch - 292ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.04663 to 0.04514, saving model to LSTM8.h5
16/16 - 0s - loss: 1.3529 - val_loss: 0.0451 - lr: 0.0010 - 151ms/epoch - 9ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.04514 to 0.04452, saving model to LSTM8.h5
16/16 - 0s - loss: 1.3046 - val_loss: 0.0445 - lr: 0.0010 - 169ms/epoch - 11ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.2534 - val_loss: 0.0448 - lr: 0.0010 - 136ms/epoch - 8ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.2042 - val_loss: 0.0456 - lr: 0.0010 - 136ms/epoch - 8ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.1598 - val_loss: 0.0469 - lr: 0.0010 - 140ms/epoch - 9ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.1206 - val_loss: 0.0484 - lr: 0.0010 - 143ms/epoch - 9ms/step
Epoch 8/500

Epoch 00008: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00008: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0858 - val_loss: 0.0501 - lr: 0.0010 - 131ms/epoch - 8ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0653 - val_loss: 0.0503 - lr: 1.0000e-04 - 152ms/epoch - 9ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0624 - val_loss: 0.0504 - lr: 1.0000e-04 - 132ms/epoch - 8ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0595 - val_loss: 0.0506 - lr: 1.0000e-04 - 124ms/epoch - 8ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0567 - val_loss: 0.0508 - lr: 1.0000e-04 - 149ms/epoch - 9ms/step
Epoch 13/500

Epoch 00013: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00013: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0538 - val_loss: 0.0510 - lr: 1.0000e-04 - 137ms/epoch - 9ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0520 - val_loss: 0.0510 - lr: 1.0000e-05 - 138ms/epoch - 9ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0518 - val_loss: 0.0511 - lr: 1.0000e-05 - 135ms/epoch - 8ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0515 - val_loss: 0.0511 - lr: 1.0000e-05 - 138ms/epoch - 9ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0512 - val_loss: 0.0511 - lr: 1.0000e-05 - 128ms/epoch - 8ms/step
Epoch 18/500

Epoch 00018: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00018: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0509 - val_loss: 0.0511 - lr: 1.0000e-05 - 132ms/epoch - 8ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0506 - val_loss: 0.0511 - lr: 1.0000e-05 - 181ms/epoch - 11ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0504 - val_loss: 0.0512 - lr: 1.0000e-05 - 144ms/epoch - 9ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0501 - val_loss: 0.0512 - lr: 1.0000e-05 - 142ms/epoch - 9ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0498 - val_loss: 0.0512 - lr: 1.0000e-05 - 137ms/epoch - 9ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0495 - val_loss: 0.0512 - lr: 1.0000e-05 - 140ms/epoch - 9ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0492 - val_loss: 0.0513 - lr: 1.0000e-05 - 144ms/epoch - 9ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0490 - val_loss: 0.0513 - lr: 1.0000e-05 - 141ms/epoch - 9ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0487 - val_loss: 0.0513 - lr: 1.0000e-05 - 140ms/epoch - 9ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0484 - val_loss: 0.0513 - lr: 1.0000e-05 - 139ms/epoch - 9ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0481 - val_loss: 0.0513 - lr: 1.0000e-05 - 144ms/epoch - 9ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0478 - val_loss: 0.0514 - lr: 1.0000e-05 - 135ms/epoch - 8ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0475 - val_loss: 0.0514 - lr: 1.0000e-05 - 136ms/epoch - 9ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0473 - val_loss: 0.0514 - lr: 1.0000e-05 - 179ms/epoch - 11ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0470 - val_loss: 0.0514 - lr: 1.0000e-05 - 152ms/epoch - 10ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0467 - val_loss: 0.0515 - lr: 1.0000e-05 - 141ms/epoch - 9ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0464 - val_loss: 0.0515 - lr: 1.0000e-05 - 127ms/epoch - 8ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0461 - val_loss: 0.0515 - lr: 1.0000e-05 - 152ms/epoch - 10ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0458 - val_loss: 0.0515 - lr: 1.0000e-05 - 127ms/epoch - 8ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0456 - val_loss: 0.0516 - lr: 1.0000e-05 - 153ms/epoch - 10ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0453 - val_loss: 0.0516 - lr: 1.0000e-05 - 144ms/epoch - 9ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0450 - val_loss: 0.0516 - lr: 1.0000e-05 - 152ms/epoch - 10ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0447 - val_loss: 0.0516 - lr: 1.0000e-05 - 137ms/epoch - 9ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0444 - val_loss: 0.0517 - lr: 1.0000e-05 - 139ms/epoch - 9ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0441 - val_loss: 0.0517 - lr: 1.0000e-05 - 154ms/epoch - 10ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0438 - val_loss: 0.0517 - lr: 1.0000e-05 - 131ms/epoch - 8ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0436 - val_loss: 0.0517 - lr: 1.0000e-05 - 147ms/epoch - 9ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0433 - val_loss: 0.0518 - lr: 1.0000e-05 - 132ms/epoch - 8ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0430 - val_loss: 0.0518 - lr: 1.0000e-05 - 139ms/epoch - 9ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0427 - val_loss: 0.0518 - lr: 1.0000e-05 - 139ms/epoch - 9ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0424 - val_loss: 0.0518 - lr: 1.0000e-05 - 147ms/epoch - 9ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0421 - val_loss: 0.0519 - lr: 1.0000e-05 - 143ms/epoch - 9ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0419 - val_loss: 0.0519 - lr: 1.0000e-05 - 135ms/epoch - 8ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0416 - val_loss: 0.0519 - lr: 1.0000e-05 - 141ms/epoch - 9ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0413 - val_loss: 0.0519 - lr: 1.0000e-05 - 131ms/epoch - 8ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.04452
16/16 - 0s - loss: 1.0410 - val_loss: 0.0520 - lr: 1.0000e-05 - 193ms/epoch - 12ms/step
Epoch 00053: early stopping
SMA
Prediction vs Close:		54.85% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 23.581571013749205 
RMSE:	 4.8560859767665985 
MAPE:	 3.8329222497365705

EMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 36.33111760049725 
RMSE:	 6.027529975080776 
MAPE:	 4.717669821139163
WMA
WMA([input_arrays], [timeperiod=30])

Weighted Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
49

Working on WMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-15462.744, Time=14.88 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-13144.103, Time=2.90 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16179.868, Time=7.19 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14670.350, Time=14.57 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-15643.233, Time=21.05 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-15673.437, Time=18.26 sec
 ARIMA(1,3,0)(0,0,0)[0] intercept   : AIC=-15494.535, Time=8.25 sec

Best model:  ARIMA(1,3,0)(0,0,0)[0]          
Total fit time: 87.120 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(1, 3, 0)   Log Likelihood                8111.934
Date:                Mon, 13 Dec 2021   AIC                         -16179.868
Time:                        00:01:34   BIC                         -16076.670
Sample:                             0   HQIC                        -16140.236
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -1.474e-05      0.000     -0.048      0.961      -0.001       0.001
x2         -1.471e-05      0.000     -0.041      0.967      -0.001       0.001
x3         -1.475e-05      0.000     -0.072      0.943      -0.000       0.000
x4             1.0000      0.000   3644.383      0.000       0.999       1.001
x5         -1.405e-05      0.000     -0.051      0.960      -0.001       0.001
x6         -2.487e-05   4.39e-05     -0.567      0.571      -0.000    6.11e-05
x7         -1.467e-05      0.000     -0.134      0.893      -0.000       0.000
x8             0.0004      0.000      3.240      0.001       0.000       0.001
x9          3.739e-06      0.001      0.003      0.998      -0.003       0.003
x10           -0.0006      0.001     -0.447      0.655      -0.003       0.002
x11            0.0024   2.31e-05    105.301      0.000       0.002       0.002
x12           -0.0019      0.000     -7.274      0.000      -0.002      -0.001
x13        -1.473e-05      0.000     -0.113      0.910      -0.000       0.000
x14        -4.124e-05      0.000     -0.135      0.893      -0.001       0.001
x15        -1.347e-05      0.000     -0.095      0.924      -0.000       0.000
x16        -2.422e-05      0.000     -0.100      0.920      -0.000       0.000
x17        -1.471e-05      0.000     -0.112      0.911      -0.000       0.000
x18         2.884e-06      0.000      0.006      0.995      -0.001       0.001
x19        -1.493e-05      0.000     -0.105      0.916      -0.000       0.000
x20         3.469e-06      0.000      0.007      0.994      -0.001       0.001
ar.L1         -0.6665   6.84e-05  -9743.045      0.000      -0.667      -0.666
sigma2      1.498e-10   7.34e-11      2.042      0.041    6.03e-12    2.94e-10
===================================================================================
Ljung-Box (L1) (Q):                  89.34   Jarque-Bera (JB):           3270298.31
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             5.18
Prob(H) (two-sided):                  0.00   Kurtosis:                       315.08
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 3.61e+19. Standard errors may be unstable.
ARIMA order: (1, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.04883, saving model to LSTM8.h5
17/17 - 5s - loss: 1.4285 - val_loss: 0.0488 - lr: 0.0010 - 5s/epoch - 306ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.04883
17/17 - 0s - loss: 1.4135 - val_loss: 0.0495 - lr: 0.0010 - 149ms/epoch - 9ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.04883
17/17 - 0s - loss: 1.3968 - val_loss: 0.0503 - lr: 0.0010 - 140ms/epoch - 8ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.04883
17/17 - 0s - loss: 1.3780 - val_loss: 0.0512 - lr: 0.0010 - 145ms/epoch - 9ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.04883
17/17 - 0s - loss: 1.3568 - val_loss: 0.0522 - lr: 0.0010 - 140ms/epoch - 8ms/step
Epoch 6/500

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00006: val_loss did not improve from 0.04883
17/17 - 0s - loss: 1.3339 - val_loss: 0.0534 - lr: 0.0010 - 154ms/epoch - 9ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.04883
17/17 - 0s - loss: 1.3186 - val_loss: 0.0535 - lr: 1.0000e-04 - 142ms/epoch - 8ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.04883
17/17 - 0s - loss: 1.3163 - val_loss: 0.0536 - lr: 1.0000e-04 - 154ms/epoch - 9ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.04883
17/17 - 0s - loss: 1.3141 - val_loss: 0.0537 - lr: 1.0000e-04 - 145ms/epoch - 9ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.04883
17/17 - 0s - loss: 1.3120 - val_loss: 0.0538 - lr: 1.0000e-04 - 131ms/epoch - 8ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00011: val_loss did not improve from 0.04883
17/17 - 0s - loss: 1.3099 - val_loss: 0.0539 - lr: 1.0000e-04 - 141ms/epoch - 8ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.04883
17/17 - 0s - loss: 1.3085 - val_loss: 0.0539 - lr: 1.0000e-05 - 155ms/epoch - 9ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.04883
17/17 - 0s - loss: 1.3083 - val_loss: 0.0540 - lr: 1.0000e-05 - 182ms/epoch - 11ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.04883
17/17 - 0s - loss: 1.3081 - val_loss: 0.0540 - lr: 1.0000e-05 - 142ms/epoch - 8ms/step
[Epochs 15–51 truncated: val_loss never improved on 0.04883, holding at 0.0540–0.0544 while training loss crept from 1.3079 down to 1.3007 at lr 1e-05; ReduceLROnPlateau re-fired at epoch 16 with the rate already clamped at 1e-05.]
Epoch 00051: early stopping
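The trace above shows `ReduceLROnPlateau` cutting the learning rate and `EarlyStopping` firing after a long `val_loss` plateau. A plain-Python sketch of that bookkeeping (the patience values of 5 and 50 are assumptions read off the log, not taken from the notebook's code):

```python
def simulate_callbacks(val_losses, lr=1e-3, factor=0.1,
                       lr_patience=5, stop_patience=50, min_lr=1e-5):
    # Mirrors what Keras' ReduceLROnPlateau + EarlyStopping track per epoch.
    # Patience values are assumptions inferred from the log above.
    best = float('inf')
    lr_wait = stop_wait = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:                      # val_loss improved: reset both counters
            best, lr_wait, stop_wait = loss, 0, 0
            continue
        lr_wait += 1
        stop_wait += 1
        if lr_wait >= lr_patience:           # plateau: cut the learning rate 10x
            lr = max(lr * factor, min_lr)
            lr_wait = 0
        if stop_wait >= stop_patience:       # long plateau: stop early
            return epoch, lr, best
    return len(val_losses), lr, best
```

Fed a series that improves only at epoch 1 and then stalls, this reproduces the pattern in the log: rate cuts at epochs 6 and 11, early stop at epoch 51.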
SMA
Prediction vs Close:		54.85% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 23.581571013749205 
RMSE:	 4.8560859767665985 
MAPE:	 3.8329222497365705

EMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 36.33111760049725 
RMSE:	 6.027529975080776 
MAPE:	 4.717669821139163

WMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 47.98959198497618 
RMSE:	 6.927452055768859 
MAPE:	 5.543762493289533
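The figures above can be reproduced with a few lines of NumPy. A minimal sketch (the function and variable names are my own; "Prediction vs Close" is taken to mean directional accuracy of the predicted move against the actual close's move, matching the report format):

```python
import numpy as np

def evaluate(pred, close):
    """Directional accuracy plus MSE/RMSE/MAPE for one hybrid run."""
    pred, close = np.asarray(pred, float), np.asarray(close, float)
    # Fraction of days where the predicted move direction matches the actual move.
    dir_acc = np.mean(np.sign(np.diff(pred)) == np.sign(np.diff(close))) * 100
    mse = np.mean((pred - close) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((close - pred) / close)) * 100
    return dir_acc, mse, rmse, mape
```

The same helper applied to two prediction series against each other would give the "Prediction vs Prediction" line.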
DEMA
DEMA([input_arrays], [timeperiod=30])

Double Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
89
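The help text above is TA-Lib's docstring for `DEMA`; the trailing number printed after each docstring is left as in the original output. DEMA is defined as 2·EMA(price) − EMA(EMA(price)), which reduces the lag of a single EMA. A minimal pandas sketch of that definition (my own helper, not the notebook's TA-Lib call; note pandas' `ewm` seeds the average differently from TA-Lib, so values near the start of the series will differ slightly):

```python
import pandas as pd

def dema(price: pd.Series, timeperiod: int = 30) -> pd.Series:
    # Double EMA: 2*EMA - EMA(EMA). Cancels much of the single EMA's lag.
    ema1 = price.ewm(span=timeperiod, adjust=False).mean()
    ema2 = ema1.ewm(span=timeperiod, adjust=False).mean()
    return 2 * ema1 - ema2
```

With TA-Lib installed, `talib.DEMA(close, timeperiod=30)` is the equivalent call.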

Working on DEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17007.773, Time=3.52 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14576.593, Time=5.11 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16293.727, Time=8.63 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14574.593, Time=8.37 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16647.994, Time=11.16 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15621.952, Time=11.98 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-16876.201, Time=12.28 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-17032.019, Time=6.53 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-17006.612, Time=3.74 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-17089.440, Time=7.48 sec
 ARIMA(3,3,2)(0,0,0)[0]             : AIC=inf, Time=17.14 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-17005.977, Time=4.02 sec
 ARIMA(3,3,1)(0,0,0)[0] intercept   : AIC=-17000.665, Time=4.81 sec

Best model:  ARIMA(3,3,1)(0,0,0)[0]          
Total fit time: 104.791 seconds
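The stepwise search trace above comes from pmdarima's `auto_arima`. A sketch of a call that would produce such a trace under pmdarima 1.8.x (the fixed `d=3` differencing and the exogenous regressor block are inferred from the output; the notebook's exact arguments are not shown here):

```python
import pmdarima as pm

def fit_linear_part(series, exog=None):
    """Stepwise AIC search matching the trace above (d=3, non-seasonal)."""
    # `exogenous` is the pmdarima 1.x keyword (renamed `X` in 2.x);
    # the 20 regressors x1..x20 in the SARIMAX summary would be passed here.
    model = pm.auto_arima(
        series,
        exogenous=exog,
        d=3,                            # third difference: ARIMA(p,3,q) as above
        seasonal=False,                 # the (0,0,0)[0] seasonal part
        stepwise=True,
        information_criterion='aic',
        trace=True,
        error_action='ignore',
        suppress_warnings=True,
    )
    return model                        # model.order gives e.g. (3, 3, 1)
```

The occasional `AIC=inf` rows and the divide-by-zero `RuntimeWarning` come from candidate fits whose MA polynomial is non-invertible; `suppress_warnings=True` quiets but does not change them.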
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 1)   Log Likelihood                8569.720
Date:                Mon, 13 Dec 2021   AIC                         -17089.440
Time:                        00:04:26   BIC                         -16972.169
Sample:                             0   HQIC                        -17044.403
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -2.799e-10   1.36e-20  -2.06e+10      0.000    -2.8e-10    -2.8e-10
x2         -2.816e-10   1.37e-20  -2.06e+10      0.000   -2.82e-10   -2.82e-10
x3         -2.804e-10   1.36e-20  -2.06e+10      0.000    -2.8e-10    -2.8e-10
x4             1.0000   1.36e-20   7.33e+19      0.000       1.000       1.000
x5         -2.598e-10   1.31e-20  -1.98e+10      0.000    -2.6e-10    -2.6e-10
x6         -1.388e-09   2.97e-20  -4.67e+10      0.000   -1.39e-09   -1.39e-09
x7         -2.788e-10   1.36e-20  -2.05e+10      0.000   -2.79e-10   -2.79e-10
x8         -2.761e-10   1.35e-20  -2.04e+10      0.000   -2.76e-10   -2.76e-10
x9          -2.22e-12   3.36e-22  -6.61e+09      0.000   -2.22e-12   -2.22e-12
x10        -1.345e-10   9.36e-21  -1.44e+10      0.000   -1.34e-10   -1.34e-10
x11        -2.898e-10   1.38e-20  -2.09e+10      0.000    -2.9e-10    -2.9e-10
x12        -2.602e-10   1.31e-20  -1.98e+10      0.000    -2.6e-10    -2.6e-10
x13        -2.807e-10   1.36e-20  -2.06e+10      0.000   -2.81e-10   -2.81e-10
x14         -1.87e-09   3.52e-20  -5.31e+10      0.000   -1.87e-09   -1.87e-09
x15        -2.767e-10   1.37e-20  -2.03e+10      0.000   -2.77e-10   -2.77e-10
x16        -8.184e-11   7.33e-21  -1.12e+10      0.000   -8.18e-11   -8.18e-11
x17        -2.407e-10   1.27e-20   -1.9e+10      0.000   -2.41e-10   -2.41e-10
x18        -6.412e-10   2.06e-20  -3.11e+10      0.000   -6.41e-10   -6.41e-10
x19        -2.915e-10   1.39e-20   -2.1e+10      0.000   -2.92e-10   -2.92e-10
x20        -4.337e-10   1.69e-20  -2.56e+10      0.000   -4.34e-10   -4.34e-10
ar.L1         -0.4924   1.46e-22  -3.38e+21      0.000      -0.492      -0.492
ar.L2         -0.1923   8.47e-23  -2.27e+21      0.000      -0.192      -0.192
ar.L3         -0.0461   4.02e-23  -1.15e+21      0.000      -0.046      -0.046
ma.L1         -0.7078   3.31e-22  -2.14e+21      0.000      -0.708      -0.708
sigma2       8.99e-11   6.95e-11      1.293      0.196   -4.64e-11    2.26e-10
===================================================================================
Ljung-Box (L1) (Q):                  55.12   Jarque-Bera (JB):           4171061.36
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             5.27
Prob(H) (two-sided):                  0.00   Kurtosis:                       355.48
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.88e+40. Standard errors may be unstable.
ARIMA order: (3, 3, 1) 
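Each block in this section repeats the same hybrid recipe: the ARIMA fit above supplies the linear component, the LSTM trained next models the residuals ARIMA leaves behind, and the two forecasts are summed. A minimal sketch of that final combination step (all names are illustrative; `arima_model`, `lstm_model`, and `scaler` stand in for the fitted objects from the logs above):

```python
import numpy as np

def hybrid_forecast(arima_model, lstm_model, scaler, last_window, steps):
    """ARIMA linear forecast + LSTM forecast of the scaled residual series."""
    # Linear part: ARIMA's own multi-step forecast.
    linear = np.asarray(arima_model.predict(n_periods=steps), dtype=float)
    # Nonlinear part: LSTM predicts scaled residuals from the trailing window.
    window = np.asarray(last_window, dtype=float).reshape(1, -1, 1)
    resid_scaled = np.asarray(lstm_model.predict(window)).ravel()[:steps]
    residual = scaler.inverse_transform(resid_scaled.reshape(-1, 1)).ravel()
    return linear + residual            # hybrid = linear + nonlinear component
```

The balance discussed in the introduction lives here: the smoother the MA input, the more of the signal ARIMA captures and the less volatility is left for the LSTM residual model.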

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.04911, saving model to LSTM8.h5
10/10 - 5s - loss: 1.4512 - val_loss: 0.0491 - lr: 0.0010 - 5s/epoch - 503ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.04911
10/10 - 0s - loss: 1.4372 - val_loss: 0.0494 - lr: 0.0010 - 90ms/epoch - 9ms/step
[Epochs 3–51 truncated: val_loss never improved on 0.04911; ReduceLROnPlateau cut the learning rate to 1e-04 at epoch 6 and to 1e-05 at epoch 11, after which val_loss sat at 0.0506–0.0508 while training loss eased from 1.4239 to 1.3626.]
Epoch 00051: early stopping
SMA, EMA, WMA — metrics unchanged from the previous run (see above).

DEMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 147.1351003665105 
RMSE:	 12.129925818673026 
MAPE:	 10.86165140779962
KAMA
KAMA([input_arrays], [timeperiod=30])

Kaufman Adaptive Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
18

Working on KAMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17007.733, Time=3.51 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14576.593, Time=5.09 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16469.294, Time=9.35 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14574.593, Time=7.71 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16346.513, Time=10.21 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-16569.862, Time=12.29 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-16356.870, Time=18.09 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-17033.457, Time=6.54 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-17006.582, Time=3.70 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-17089.434, Time=8.08 sec
 ARIMA(3,3,2)(0,0,0)[0]             : AIC=-15789.397, Time=13.90 sec
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-15386.395, Time=25.39 sec
 ARIMA(3,3,1)(0,0,0)[0] intercept   : AIC=47.433, Time=7.46 sec

Best model:  ARIMA(3,3,1)(0,0,0)[0]          
Total fit time: 131.339 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 1)   Log Likelihood                8569.717
Date:                Mon, 13 Dec 2021   AIC                         -17089.434
Time:                        00:18:21   BIC                         -16972.163
Sample:                             0   HQIC                        -17044.397
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -2.222e-10   9.26e-21   -2.4e+10      0.000   -2.22e-10   -2.22e-10
x2         -2.175e-10   9.18e-21  -2.37e+10      0.000   -2.18e-10   -2.18e-10
x3         -2.088e-10   8.98e-21  -2.33e+10      0.000   -2.09e-10   -2.09e-10
x4             1.0000   9.08e-21    1.1e+20      0.000       1.000       1.000
x5         -1.927e-10   8.64e-21  -2.23e+10      0.000   -1.93e-10   -1.93e-10
x6          -1.33e-09   2.17e-20  -6.14e+10      0.000   -1.33e-09   -1.33e-09
x7         -2.053e-10   8.93e-21   -2.3e+10      0.000   -2.05e-10   -2.05e-10
x8         -1.999e-10   8.84e-21  -2.26e+10      0.000      -2e-10      -2e-10
x9           -3.6e-11   1.09e-21  -3.29e+10      0.000    -3.6e-11    -3.6e-11
x10        -9.188e-11   3.87e-21  -2.37e+10      0.000   -9.19e-11   -9.19e-11
x11        -2.014e-10   8.86e-21  -2.27e+10      0.000   -2.01e-10   -2.01e-10
x12        -1.994e-10   8.77e-21  -2.27e+10      0.000   -1.99e-10   -1.99e-10
x13        -2.115e-10   9.05e-21  -2.34e+10      0.000   -2.12e-10   -2.12e-10
x14        -1.723e-09    2.6e-20  -6.63e+10      0.000   -1.72e-09   -1.72e-09
x15        -2.116e-10    9.1e-21  -2.33e+10      0.000   -2.12e-10   -2.12e-10
x16        -3.169e-10   1.11e-20  -2.85e+10      0.000   -3.17e-10   -3.17e-10
x17        -1.804e-10    8.4e-21  -2.15e+10      0.000    -1.8e-10    -1.8e-10
x18        -1.463e-10   7.54e-21  -1.94e+10      0.000   -1.46e-10   -1.46e-10
x19        -2.598e-10   1.01e-20  -2.58e+10      0.000    -2.6e-10    -2.6e-10
x20        -3.922e-10   1.24e-20  -3.18e+10      0.000   -3.92e-10   -3.92e-10
ar.L1         -0.4926   1.44e-22  -3.42e+21      0.000      -0.493      -0.493
ar.L2         -0.1937    8.6e-23  -2.25e+21      0.000      -0.194      -0.194
ar.L3         -0.0441   3.86e-23  -1.14e+21      0.000      -0.044      -0.044
ma.L1         -0.7085    3.3e-22  -2.15e+21      0.000      -0.709      -0.709
sigma2       8.99e-11   6.95e-11      1.293      0.196   -4.64e-11    2.26e-10
===================================================================================
Ljung-Box (L1) (Q):                  57.24   Jarque-Bera (JB):           3956070.89
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.01   Skew:                             5.16
Prob(H) (two-sided):                  0.00   Kurtosis:                       346.28
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 5.5e+39. Standard errors may be unstable.
ARIMA order: (3, 3, 1) 
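Before each of these training runs the residual series has to be windowed into the `(samples, timesteps, features)` tensors an LSTM expects. A minimal sketch of that reshaping (the window length of 8 is an assumption, suggested only by the `LSTM8.h5` checkpoint name):

```python
import numpy as np

def make_windows(series, lookback=8):
    # Slide a fixed-length window over the series; each window is paired
    # with the next value as its target.
    # Shapes: X -> (n, lookback, 1), y -> (n,).
    series = np.asarray(series, dtype=float)
    X = np.stack([series[i:i + lookback]
                  for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X[..., np.newaxis], y
```

The batch counts in the logs (10/10, 17/17, 45/45 steps per epoch) then follow from the number of windows divided by the batch size.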

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.04594, saving model to LSTM8.h5
45/45 - 5s - loss: 1.3817 - val_loss: 0.0459 - lr: 0.0010 - 5s/epoch - 109ms/step
[Epochs 2–51 truncated: val_loss never improved on 0.04594; the learning rate was reduced to 1e-04 at epoch 6 and to 1e-05 at epoch 11 while val_loss drifted up from 0.0471 to 0.0729 and training loss fell from 1.3312 to 1.0133.]
Epoch 00051: early stopping
SMA, EMA, WMA, DEMA — metrics unchanged from the previous runs (see above).

KAMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	39.93% Accuracy
MSE:	 23.174387847088422 
RMSE:	 4.8139783804134835 
MAPE:	 3.8527324736855544
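With five smoothing variants reported so far, the per-MA figures are easier to compare side by side. A small pandas sketch tabulating the printed results (numbers copied from the output above, rounded to four decimals):

```python
import pandas as pd

# Rows: one moving-average variant per hybrid run; values from the log above.
results = pd.DataFrame(
    {
        "SMA":  [54.85, 48.51,  23.5816,  4.8561,  3.8329],
        "EMA":  [54.10, 46.27,  36.3311,  6.0275,  4.7177],
        "WMA":  [52.99, 46.27,  47.9896,  6.9275,  5.5438],
        "DEMA": [52.99, 45.52, 147.1351, 12.1299, 10.8617],
        "KAMA": [55.22, 39.93,  23.1744,  4.8140,  3.8527],
    },
    index=["acc_vs_close_%", "acc_vs_pred_%", "MSE", "RMSE", "MAPE"],
).T

print(results.sort_values("RMSE"))
```

Sorted this way, KAMA and SMA lead on the error metrics while DEMA trails badly, consistent with the smoother inputs leaving a better-behaved residual for the LSTM.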
MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])

MidPoint over period (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 14
Outputs:
    real
14

Working on MIDPOINT predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17005.792, Time=3.70 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14576.592, Time=5.08 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16618.742, Time=8.58 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14574.592, Time=8.13 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-17004.301, Time=3.90 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15715.779, Time=22.53 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=inf, Time=3.87 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-17007.442, Time=4.04 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-17188.392, Time=17.09 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-17002.377, Time=4.13 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=-16356.269, Time=15.38 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 96.474 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood                8618.196
Date:                Mon, 13 Dec 2021   AIC                         -17188.392
Time:                        00:31:27   BIC                         -17075.812
Sample:                             0   HQIC                        -17145.157
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -3.582e-10   2.18e-20  -1.64e+10      0.000   -3.58e-10   -3.58e-10
x2         -3.575e-10   2.25e-20  -1.59e+10      0.000   -3.57e-10   -3.57e-10
x3         -3.653e-10   2.09e-20  -1.75e+10      0.000   -3.65e-10   -3.65e-10
x4             1.0000   2.18e-20   4.59e+19      0.000       1.000       1.000
x5         -3.252e-10   2.07e-20  -1.57e+10      0.000   -3.25e-10   -3.25e-10
x6         -7.157e-09   1.78e-19  -4.03e+10      0.000   -7.16e-09   -7.16e-09
x7          -3.29e-10   2.09e-20  -1.58e+10      0.000   -3.29e-10   -3.29e-10
x8          -3.28e-10   2.12e-20  -1.54e+10      0.000   -3.28e-10   -3.28e-10
x9         -1.775e-10   1.29e-21  -1.37e+11      0.000   -1.77e-10   -1.77e-10
x10         -2.94e-10    5.5e-21  -5.34e+10      0.000   -2.94e-10   -2.94e-10
x11        -3.247e-10   2.11e-20  -1.54e+10      0.000   -3.25e-10   -3.25e-10
x12        -3.357e-10   2.11e-20  -1.59e+10      0.000   -3.36e-10   -3.36e-10
x13         -3.46e-10   2.14e-20  -1.62e+10      0.000   -3.46e-10   -3.46e-10
x14        -2.825e-09   6.25e-20  -4.52e+10      0.000   -2.82e-09   -2.82e-09
x15        -3.957e-10   2.33e-20  -1.69e+10      0.000   -3.96e-10   -3.96e-10
x16        -2.548e-10   1.87e-20  -1.36e+10      0.000   -2.55e-10   -2.55e-10
x17        -2.495e-10   1.85e-20  -1.35e+10      0.000   -2.49e-10   -2.49e-10
x18        -1.073e-09   3.84e-20  -2.79e+10      0.000   -1.07e-09   -1.07e-09
x19        -4.343e-10   2.45e-20  -1.78e+10      0.000   -4.34e-10   -4.34e-10
x20        -1.047e-09   3.78e-20  -2.77e+10      0.000   -1.05e-09   -1.05e-09
ar.L1         -1.2157   8.99e-23  -1.35e+22      0.000      -1.216      -1.216
ar.L2         -0.9187   9.81e-23  -9.36e+21      0.000      -0.919      -0.919
ar.L3         -0.4095   9.98e-23   -4.1e+21      0.000      -0.409      -0.409
sigma2      7.969e-11   6.92e-11      1.151      0.250    -5.6e-11    2.15e-10
===================================================================================
Ljung-Box (L1) (Q):                   2.47   Jarque-Bera (JB):             15463.35
Prob(Q):                              0.12   Prob(JB):                         0.00
Heteroskedasticity (H):               0.35   Skew:                             0.62
Prob(H) (two-sided):                  0.00   Kurtosis:                        24.44
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 2.74e+40. Standard errors may be unstable.
ARIMA order: (3, 3, 0) 
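The stepwise search above picks the order that minimizes AIC = 2k − 2·log L. The table's figures are self-consistent: with the reported log-likelihood of 8618.196 and k = 24 estimated parameters (20 exogenous coefficients, 3 AR terms, and sigma2 — a count inferred from the summary, not from the notebook's code), the formula recovers the reported −17188.392:

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion: 2k - 2*logL (lower is better)."""
    return 2 * n_params - 2 * log_likelihood

# Values read off the SARIMAX(3, 3, 0) summary above:
# 20 exogenous coefficients + 3 AR terms + sigma2 = 24 parameters.
table_aic = aic(8618.196, 24)  # -17188.392, matching the summary
```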

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.04465, saving model to LSTM8.h5
58/58 - 6s - loss: 1.3478 - val_loss: 0.0446 - lr: 0.0010 - 6s/epoch - 95ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.04465
58/58 - 0s - loss: 1.2302 - val_loss: 0.0524 - lr: 0.0010 - 439ms/epoch - 8ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.04465
58/58 - 0s - loss: 1.1241 - val_loss: 0.0574 - lr: 0.0010 - 405ms/epoch - 7ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.9970 - val_loss: 0.0595 - lr: 0.0010 - 431ms/epoch - 7ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.8717 - val_loss: 0.0643 - lr: 0.0010 - 424ms/epoch - 7ms/step
Epoch 6/500

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00006: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.8005 - val_loss: 0.0693 - lr: 0.0010 - 430ms/epoch - 7ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7714 - val_loss: 0.0698 - lr: 1.0000e-04 - 411ms/epoch - 7ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7676 - val_loss: 0.0703 - lr: 1.0000e-04 - 422ms/epoch - 7ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7638 - val_loss: 0.0709 - lr: 1.0000e-04 - 425ms/epoch - 7ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7601 - val_loss: 0.0715 - lr: 1.0000e-04 - 400ms/epoch - 7ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00011: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7564 - val_loss: 0.0721 - lr: 1.0000e-04 - 416ms/epoch - 7ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7542 - val_loss: 0.0721 - lr: 1.0000e-05 - 413ms/epoch - 7ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7538 - val_loss: 0.0722 - lr: 1.0000e-05 - 423ms/epoch - 7ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7534 - val_loss: 0.0723 - lr: 1.0000e-05 - 412ms/epoch - 7ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7531 - val_loss: 0.0723 - lr: 1.0000e-05 - 414ms/epoch - 7ms/step
Epoch 16/500

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00016: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7527 - val_loss: 0.0724 - lr: 1.0000e-05 - 434ms/epoch - 7ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7523 - val_loss: 0.0725 - lr: 1.0000e-05 - 418ms/epoch - 7ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7519 - val_loss: 0.0726 - lr: 1.0000e-05 - 416ms/epoch - 7ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7515 - val_loss: 0.0726 - lr: 1.0000e-05 - 410ms/epoch - 7ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7512 - val_loss: 0.0727 - lr: 1.0000e-05 - 433ms/epoch - 7ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7508 - val_loss: 0.0728 - lr: 1.0000e-05 - 408ms/epoch - 7ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7504 - val_loss: 0.0729 - lr: 1.0000e-05 - 414ms/epoch - 7ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7500 - val_loss: 0.0730 - lr: 1.0000e-05 - 426ms/epoch - 7ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7496 - val_loss: 0.0731 - lr: 1.0000e-05 - 415ms/epoch - 7ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7492 - val_loss: 0.0732 - lr: 1.0000e-05 - 421ms/epoch - 7ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7488 - val_loss: 0.0732 - lr: 1.0000e-05 - 399ms/epoch - 7ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7484 - val_loss: 0.0733 - lr: 1.0000e-05 - 426ms/epoch - 7ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7480 - val_loss: 0.0734 - lr: 1.0000e-05 - 422ms/epoch - 7ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7476 - val_loss: 0.0735 - lr: 1.0000e-05 - 417ms/epoch - 7ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7471 - val_loss: 0.0736 - lr: 1.0000e-05 - 418ms/epoch - 7ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7467 - val_loss: 0.0737 - lr: 1.0000e-05 - 407ms/epoch - 7ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7463 - val_loss: 0.0738 - lr: 1.0000e-05 - 414ms/epoch - 7ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7459 - val_loss: 0.0739 - lr: 1.0000e-05 - 413ms/epoch - 7ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7455 - val_loss: 0.0740 - lr: 1.0000e-05 - 414ms/epoch - 7ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7451 - val_loss: 0.0741 - lr: 1.0000e-05 - 420ms/epoch - 7ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7447 - val_loss: 0.0742 - lr: 1.0000e-05 - 400ms/epoch - 7ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7443 - val_loss: 0.0744 - lr: 1.0000e-05 - 418ms/epoch - 7ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7439 - val_loss: 0.0745 - lr: 1.0000e-05 - 409ms/epoch - 7ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7435 - val_loss: 0.0746 - lr: 1.0000e-05 - 409ms/epoch - 7ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7431 - val_loss: 0.0747 - lr: 1.0000e-05 - 406ms/epoch - 7ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7426 - val_loss: 0.0748 - lr: 1.0000e-05 - 401ms/epoch - 7ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7422 - val_loss: 0.0749 - lr: 1.0000e-05 - 416ms/epoch - 7ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7418 - val_loss: 0.0750 - lr: 1.0000e-05 - 397ms/epoch - 7ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7414 - val_loss: 0.0751 - lr: 1.0000e-05 - 412ms/epoch - 7ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7410 - val_loss: 0.0753 - lr: 1.0000e-05 - 411ms/epoch - 7ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7406 - val_loss: 0.0754 - lr: 1.0000e-05 - 405ms/epoch - 7ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7402 - val_loss: 0.0755 - lr: 1.0000e-05 - 407ms/epoch - 7ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7398 - val_loss: 0.0756 - lr: 1.0000e-05 - 402ms/epoch - 7ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7394 - val_loss: 0.0757 - lr: 1.0000e-05 - 408ms/epoch - 7ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7390 - val_loss: 0.0758 - lr: 1.0000e-05 - 418ms/epoch - 7ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.04465
58/58 - 0s - loss: 0.7386 - val_loss: 0.0760 - lr: 1.0000e-05 - 421ms/epoch - 7ms/step
Epoch 00051: early stopping
SMA
Prediction vs Close:		54.85% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 23.581571013749205 
RMSE:	 4.8560859767665985 
MAPE:	 3.8329222497365705

EMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 36.33111760049725 
RMSE:	 6.027529975080776 
MAPE:	 4.717669821139163

WMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 47.98959198497618 
RMSE:	 6.927452055768859 
MAPE:	 5.543762493289533

DEMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 147.1351003665105 
RMSE:	 12.129925818673026 
MAPE:	 10.86165140779962

KAMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	39.93% Accuracy
MSE:	 23.174387847088422 
RMSE:	 4.8139783804134835 
MAPE:	 3.8527324736855544

MIDPOINT
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 18.693186732823552 
RMSE:	 4.323561810917424 
MAPE:	 3.393872875098524
T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])

Triple Exponential Moving Average (T3) (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 5
    vfactor: 0.7
Outputs:
    real
19
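TA-Lib's T3, documented above, is a triple application of Tim Tillson's "generalized DEMA", GD(x) = (1 + v)·EMA(x) − v·EMA(EMA(x)), with v = `vfactor`. A NumPy sketch of that recursion (the EMA below is seeded with the first sample, which differs from TA-Lib's warm-up handling, so values near the start of the series won't match TA-Lib exactly):

```python
import numpy as np

def ema(x, n):
    """Recursive EMA with smoothing 2/(n+1), seeded with the first sample."""
    alpha = 2.0 / (n + 1)
    out = np.empty(len(x))
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1 - alpha) * out[i - 1]
    return out

def gd(x, n, v):
    """Generalized DEMA: overshoot the EMA by v times the EMA's own lag."""
    e = ema(x, n)
    return (1 + v) * e - v * ema(e, n)

def t3(price, timeperiod=5, vfactor=0.7):
    """Tillson T3: the generalized DEMA applied three times."""
    x = np.asarray(price, dtype=float)
    return gd(gd(gd(x, timeperiod, vfactor), timeperiod, vfactor), timeperiod, vfactor)
```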

Working on T3 predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17007.439, Time=3.44 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-13714.163, Time=6.05 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-14620.288, Time=5.46 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-16512.116, Time=12.81 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-17085.548, Time=10.46 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-17009.877, Time=3.76 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-17089.740, Time=7.83 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-17006.211, Time=3.80 sec
 ARIMA(3,3,2)(0,0,0)[0]             : AIC=-17349.997, Time=18.65 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-17006.024, Time=4.30 sec
 ARIMA(3,3,3)(0,0,0)[0]             : AIC=-14720.521, Time=14.16 sec
 ARIMA(2,3,3)(0,0,0)[0]             : AIC=-16599.516, Time=14.86 sec
 ARIMA(3,3,2)(0,0,0)[0] intercept   : AIC=-13110.324, Time=19.08 sec

Best model:  ARIMA(3,3,2)(0,0,0)[0]          
Total fit time: 124.685 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 2)   Log Likelihood                8700.998
Date:                Mon, 13 Dec 2021   AIC                         -17349.997
Time:                        00:36:37   BIC                         -17228.035
Sample:                             0   HQIC                        -17303.158
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1          4.251e-09   2.48e-05      0.000      1.000   -4.85e-05    4.85e-05
x2          4.257e-09   2.48e-05      0.000      1.000   -4.86e-05    4.87e-05
x3          4.244e-09   2.34e-05      0.000      1.000   -4.58e-05    4.58e-05
x4             1.0000   2.37e-05   4.23e+04      0.000       1.000       1.000
x5          4.344e-09   2.35e-05      0.000      1.000    -4.6e-05     4.6e-05
x6          3.064e-09   6.26e-05   4.89e-05      1.000      -0.000       0.000
x7           4.26e-09   3.09e-05      0.000      1.000   -6.05e-05    6.05e-05
x8            -0.0001   4.28e-05     -2.782      0.005      -0.000   -3.51e-05
x9         -3.943e-09   4.01e-06     -0.001      0.999   -7.86e-06    7.85e-06
x10        -1.431e-05    9.6e-05     -0.149      0.881      -0.000       0.000
x11            0.0001   3.13e-05      3.693      0.000    5.42e-05       0.000
x12         1.616e-06   5.46e-05      0.030      0.976      -0.000       0.000
x13         4.247e-09   2.49e-05      0.000      1.000   -4.87e-05    4.87e-05
x14        -1.778e-08   5.56e-05     -0.000      1.000      -0.000       0.000
x15         4.488e-09      3e-05      0.000      1.000   -5.88e-05    5.88e-05
x16        -6.718e-09   4.66e-05     -0.000      1.000   -9.13e-05    9.13e-05
x17         3.935e-09    8.3e-06      0.000      1.000   -1.63e-05    1.63e-05
x18        -2.742e-08      0.000     -0.000      1.000      -0.000       0.000
x19         4.464e-09   4.48e-05   9.97e-05      1.000   -8.78e-05    8.78e-05
x20          4.06e-09      0.000   8.55e-06      1.000      -0.001       0.001
ar.L1         -1.2437   2.38e-08  -5.23e+07      0.000      -1.244      -1.244
ar.L2         -0.5344   9.34e-09  -5.72e+07      0.000      -0.534      -0.534
ar.L3         -0.1491   9.43e-10  -1.58e+08      0.000      -0.149      -0.149
ma.L1         -0.2521   9.13e-09  -2.76e+07      0.000      -0.252      -0.252
ma.L2         -0.7294   1.95e-08  -3.75e+07      0.000      -0.729      -0.729
sigma2      6.455e-11   6.89e-11      0.937      0.349   -7.05e-11       2e-10
===================================================================================
Ljung-Box (L1) (Q):                  30.63   Jarque-Bera (JB):           6336314.18
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                            13.86
Prob(H) (two-sided):                  0.00   Kurtosis:                       436.75
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.35e+27. Standard errors may be unstable.
ARIMA order: (3, 3, 2) 
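Each run above fits an ARIMA model and then trains the LSTM. In a Zhang-style hybrid, the ARIMA captures the linear structure and the LSTM is trained on the ARIMA residuals, the final forecast being the sum of the two parts. The notebook's exact combination code is not visible in this output, so the sketch below is only the standard recipe, with illustrative array names and numbers:

```python
import numpy as np

def hybrid_forecast(arima_pred, lstm_residual_pred):
    """Zhang-style hybrid: y_hat = linear (ARIMA) part + nonlinear (LSTM) part."""
    return np.asarray(arima_pred, float) + np.asarray(lstm_residual_pred, float)

# Illustrative numbers only: an ARIMA point forecast plus the LSTM's
# prediction of the residual that the ARIMA model leaves behind.
arima_pred = np.array([101.2, 102.0, 102.5])
lstm_residual_pred = np.array([0.4, -0.3, 0.1])
combined = hybrid_forecast(arima_pred, lstm_residual_pred)
```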

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.06275, saving model to LSTM8.h5
43/43 - 5s - loss: 1.4435 - val_loss: 0.0627 - lr: 0.0010 - 5s/epoch - 123ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.06275
43/43 - 0s - loss: 1.3359 - val_loss: 0.0666 - lr: 0.0010 - 328ms/epoch - 8ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.06275
43/43 - 0s - loss: 1.2476 - val_loss: 0.0714 - lr: 0.0010 - 312ms/epoch - 7ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.06275
43/43 - 0s - loss: 1.1434 - val_loss: 0.0771 - lr: 0.0010 - 313ms/epoch - 7ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.06275
43/43 - 0s - loss: 1.0163 - val_loss: 0.0844 - lr: 0.0010 - 339ms/epoch - 8ms/step
Epoch 6/500

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00006: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.9180 - val_loss: 0.0920 - lr: 0.0010 - 331ms/epoch - 8ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8737 - val_loss: 0.0927 - lr: 1.0000e-04 - 336ms/epoch - 8ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8680 - val_loss: 0.0934 - lr: 1.0000e-04 - 351ms/epoch - 8ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8625 - val_loss: 0.0940 - lr: 1.0000e-04 - 320ms/epoch - 7ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8572 - val_loss: 0.0947 - lr: 1.0000e-04 - 310ms/epoch - 7ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00011: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8521 - val_loss: 0.0954 - lr: 1.0000e-04 - 348ms/epoch - 8ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8489 - val_loss: 0.0955 - lr: 1.0000e-05 - 324ms/epoch - 8ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8484 - val_loss: 0.0956 - lr: 1.0000e-05 - 316ms/epoch - 7ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8479 - val_loss: 0.0956 - lr: 1.0000e-05 - 326ms/epoch - 8ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8474 - val_loss: 0.0957 - lr: 1.0000e-05 - 312ms/epoch - 7ms/step
Epoch 16/500

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00016: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8469 - val_loss: 0.0958 - lr: 1.0000e-05 - 307ms/epoch - 7ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8464 - val_loss: 0.0959 - lr: 1.0000e-05 - 332ms/epoch - 8ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8458 - val_loss: 0.0959 - lr: 1.0000e-05 - 304ms/epoch - 7ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8453 - val_loss: 0.0960 - lr: 1.0000e-05 - 330ms/epoch - 8ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8448 - val_loss: 0.0961 - lr: 1.0000e-05 - 340ms/epoch - 8ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8443 - val_loss: 0.0962 - lr: 1.0000e-05 - 313ms/epoch - 7ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8438 - val_loss: 0.0963 - lr: 1.0000e-05 - 309ms/epoch - 7ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8432 - val_loss: 0.0963 - lr: 1.0000e-05 - 340ms/epoch - 8ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8427 - val_loss: 0.0964 - lr: 1.0000e-05 - 321ms/epoch - 7ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8422 - val_loss: 0.0965 - lr: 1.0000e-05 - 316ms/epoch - 7ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8417 - val_loss: 0.0966 - lr: 1.0000e-05 - 344ms/epoch - 8ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8411 - val_loss: 0.0967 - lr: 1.0000e-05 - 347ms/epoch - 8ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8406 - val_loss: 0.0967 - lr: 1.0000e-05 - 308ms/epoch - 7ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8401 - val_loss: 0.0968 - lr: 1.0000e-05 - 338ms/epoch - 8ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8395 - val_loss: 0.0969 - lr: 1.0000e-05 - 335ms/epoch - 8ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8390 - val_loss: 0.0970 - lr: 1.0000e-05 - 329ms/epoch - 8ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8385 - val_loss: 0.0971 - lr: 1.0000e-05 - 350ms/epoch - 8ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8379 - val_loss: 0.0971 - lr: 1.0000e-05 - 326ms/epoch - 8ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8374 - val_loss: 0.0972 - lr: 1.0000e-05 - 322ms/epoch - 7ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8369 - val_loss: 0.0973 - lr: 1.0000e-05 - 341ms/epoch - 8ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8364 - val_loss: 0.0974 - lr: 1.0000e-05 - 317ms/epoch - 7ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8358 - val_loss: 0.0975 - lr: 1.0000e-05 - 332ms/epoch - 8ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8353 - val_loss: 0.0975 - lr: 1.0000e-05 - 334ms/epoch - 8ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8348 - val_loss: 0.0976 - lr: 1.0000e-05 - 336ms/epoch - 8ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8342 - val_loss: 0.0977 - lr: 1.0000e-05 - 312ms/epoch - 7ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8337 - val_loss: 0.0978 - lr: 1.0000e-05 - 348ms/epoch - 8ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8332 - val_loss: 0.0979 - lr: 1.0000e-05 - 346ms/epoch - 8ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8326 - val_loss: 0.0980 - lr: 1.0000e-05 - 320ms/epoch - 7ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8321 - val_loss: 0.0980 - lr: 1.0000e-05 - 332ms/epoch - 8ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8316 - val_loss: 0.0981 - lr: 1.0000e-05 - 346ms/epoch - 8ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8310 - val_loss: 0.0982 - lr: 1.0000e-05 - 325ms/epoch - 8ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8305 - val_loss: 0.0983 - lr: 1.0000e-05 - 352ms/epoch - 8ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8300 - val_loss: 0.0984 - lr: 1.0000e-05 - 327ms/epoch - 8ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8294 - val_loss: 0.0985 - lr: 1.0000e-05 - 320ms/epoch - 7ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8289 - val_loss: 0.0985 - lr: 1.0000e-05 - 339ms/epoch - 8ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.06275
43/43 - 0s - loss: 0.8284 - val_loss: 0.0986 - lr: 1.0000e-05 - 338ms/epoch - 8ms/step
Epoch 00051: early stopping
SMA
Prediction vs Close:		54.85% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 23.581571013749205 
RMSE:	 4.8560859767665985 
MAPE:	 3.8329222497365705

EMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 36.33111760049725 
RMSE:	 6.027529975080776 
MAPE:	 4.717669821139163

WMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 47.98959198497618 
RMSE:	 6.927452055768859 
MAPE:	 5.543762493289533

DEMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 147.1351003665105 
RMSE:	 12.129925818673026 
MAPE:	 10.86165140779962

KAMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	39.93% Accuracy
MSE:	 23.174387847088422 
RMSE:	 4.8139783804134835 
MAPE:	 3.8527324736855544

MIDPOINT
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 18.693186732823552 
RMSE:	 4.323561810917424 
MAPE:	 3.393872875098524

T3
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 49.37596815480495 
RMSE:	 7.026803551744203 
MAPE:	 5.656408399322556
TEMA
TEMA([input_arrays], [timeperiod=30])

Triple Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
9
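TEMA, documented above, combines three nested EMAs to cancel most of a single EMA's lag: TEMA = 3·EMA − 3·EMA(EMA) + EMA(EMA(EMA)). A compact NumPy illustration (again seeding the EMA with the first sample, so early values differ slightly from TA-Lib's):

```python
import numpy as np

def ema(x, n):
    """Recursive EMA with smoothing 2/(n+1), seeded with the first sample."""
    alpha = 2.0 / (n + 1)
    out = np.empty(len(x))
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1 - alpha) * out[i - 1]
    return out

def tema(price, timeperiod=30):
    """Triple EMA: 3*e1 - 3*e2 + e3 largely removes the lag of a plain EMA."""
    x = np.asarray(price, dtype=float)
    e1 = ema(x, timeperiod)
    e2 = ema(e1, timeperiod)
    e3 = ema(e2, timeperiod)
    return 3 * e1 - 3 * e2 + e3
```

Note the contrast with T3: both stack EMAs, but T3's `vfactor` lets it trade lag reduction against smoothness, whereas TEMA's 3/−3/+1 weights are fixed.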

Working on TEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16996.849, Time=3.48 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14177.794, Time=2.13 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16779.945, Time=8.11 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14417.099, Time=11.56 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16996.773, Time=3.99 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-14470.746, Time=9.90 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-16999.230, Time=4.03 sec
 ARIMA(0,3,3)(0,0,0)[0]             : AIC=-14413.099, Time=14.43 sec
 ARIMA(1,3,3)(0,0,0)[0]             : AIC=-16992.097, Time=4.89 sec
 ARIMA(0,3,2)(0,0,0)[0] intercept   : AIC=-16997.225, Time=3.86 sec

Best model:  ARIMA(0,3,2)(0,0,0)[0]          
Total fit time: 66.405 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(0, 3, 2)   Log Likelihood                8522.615
Date:                Mon, 13 Dec 2021   AIC                         -16999.230
Time:                        00:46:13   BIC                         -16891.341
Sample:                             0   HQIC                        -16957.796
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1           2.33e-15      0.001   2.87e-12      1.000      -0.002       0.002
x2         -4.502e-16      0.000  -1.15e-12      1.000      -0.001       0.001
x3          3.943e-17      0.001   5.53e-14      1.000      -0.001       0.001
x4             1.0000      0.001   1486.752      0.000       0.999       1.001
x5         -1.326e-14      0.001  -2.01e-11      1.000      -0.001       0.001
x6         -7.238e-16   6.02e-05   -1.2e-11      1.000      -0.000       0.000
x7          4.644e-16      0.000   1.63e-12      1.000      -0.001       0.001
x8            -0.0003   6.84e-05     -4.783      0.000      -0.000      -0.000
x9          4.956e-16      0.001   8.09e-13      1.000      -0.001       0.001
x10        -5.078e-05      0.000     -0.169      0.866      -0.001       0.001
x11            0.0005   8.52e-05      5.342      0.000       0.000       0.001
x12        -6.163e-05   6.76e-05     -0.912      0.362      -0.000    7.08e-05
x13        -6.225e-17      0.000  -1.81e-13      1.000      -0.001       0.001
x14         2.723e-16      0.000   1.71e-12      1.000      -0.000       0.000
x15         2.531e-13    9.1e-05   2.78e-09      1.000      -0.000       0.000
x16        -3.448e-13      0.000  -1.94e-09      1.000      -0.000       0.000
x17         1.188e-12      0.000   1.15e-08      1.000      -0.000       0.000
x18        -5.746e-14      0.000  -5.12e-10      1.000      -0.000       0.000
x19        -2.336e-13      0.000  -2.29e-09      1.000      -0.000       0.000
x20        -9.777e-15      0.000  -9.27e-11      1.000      -0.000       0.000
ma.L1         -1.3477   4.17e-08  -3.23e+07      0.000      -1.348      -1.348
ma.L2          0.3862   8.11e-08   4.76e+06      0.000       0.386       0.386
sigma2          1e-10   7.38e-11      1.355      0.175   -4.46e-11    2.45e-10
===================================================================================
Ljung-Box (L1) (Q):                  50.19   Jarque-Bera (JB):           4788158.62
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.04   Skew:                           -10.02
Prob(H) (two-sided):                  0.00   Kurtosis:                       380.29
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 6.4e+24. Standard errors may be unstable.
ARIMA order: (0, 3, 2) 
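An order of (0, 3, 2) means no autoregressive terms, third-order differencing, and two moving-average terms, consistent with the ma.L1/ma.L2 rows in the summary above. As a minimal illustration of what d = 3 does (hypothetical numbers, not the notebook's data):

```python
import numpy as np

# Hypothetical price series, for illustration only.
prices = np.array([100.0, 102.0, 105.0, 104.0, 108.0, 111.0, 109.0])

# ARIMA(0, 3, 2) models the series after differencing it three times:
d3 = np.diff(prices, n=3)

# np.diff(x, n=3) is the same as applying a first difference three times over;
# each pass shortens the series by one observation.
manual = np.diff(np.diff(np.diff(prices)))
print(d3)
```

Heavy differencing like this amplifies noise, which is one reason the residual series handed to the LSTM can be so volatile.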

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.07323, saving model to LSTM8.h5
90/90 - 5s - loss: 1.3231 - val_loss: 0.0732 - lr: 0.0010 - 5s/epoch - 56ms/step
[Epochs 2-50 elided: val_loss never improved on 0.07323 (it drifted from 0.0754 up to 0.1124) while training loss fell from 1.0695 to 0.6414; ReduceLROnPlateau cut the learning rate to 1e-04 at epoch 6 and to 1e-05 at epoch 11]
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.07323
90/90 - 1s - loss: 0.6412 - val_loss: 0.1126 - lr: 1.0000e-05 - 655ms/epoch - 7ms/step
Epoch 00051: early stopping
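The log above reflects three standard Keras callbacks: ModelCheckpoint (save on best val_loss), ReduceLROnPlateau, and EarlyStopping. The plateau logic can be sketched in plain Python; this is a simplified re-implementation for illustration (patience=5 inferred from the log, not confirmed by the source), not the Keras code itself:

```python
def run_plateau_schedule(val_losses, lr=1e-3, factor=0.1, patience=5, min_lr=1e-5):
    """Mimic ReduceLROnPlateau: cut lr by `factor` after `patience`
    epochs without a new best val_loss, flooring at `min_lr`."""
    best = float("inf")
    wait = 0
    history = []
    for loss in val_losses:
        if loss < best:
            best = loss   # new best: reset the patience counter
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                lr = max(lr * factor, min_lr)
                wait = 0
        history.append(lr)
    return history

# Val loss improves once, then stalls, as in the run above: the first
# reduction lands at epoch 6 and the second at epoch 11.
val_losses = [0.0732] + [0.08 + 0.001 * i for i in range(12)]
lrs = run_plateau_schedule(val_losses)
```

EarlyStopping follows the same counter idea but halts training instead of reducing the rate once its own patience is exhausted.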
SMA
Prediction vs Close:		54.85% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 23.581571013749205 
RMSE:	 4.8560859767665985 
MAPE:	 3.8329222497365705

EMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 36.33111760049725 
RMSE:	 6.027529975080776 
MAPE:	 4.717669821139163

WMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 47.98959198497618 
RMSE:	 6.927452055768859 
MAPE:	 5.543762493289533

DEMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 147.1351003665105 
RMSE:	 12.129925818673026 
MAPE:	 10.86165140779962

KAMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	39.93% Accuracy
MSE:	 23.174387847088422 
RMSE:	 4.8139783804134835 
MAPE:	 3.8527324736855544

MIDPOINT
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 18.693186732823552 
RMSE:	 4.323561810917424 
MAPE:	 3.393872875098524

T3
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 49.37596815480495 
RMSE:	 7.026803551744203 
MAPE:	 5.656408399322556

TEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	49.63% Accuracy
MSE:	 20.618939787159338 
RMSE:	 4.540808274653241 
MAPE:	 3.9138992194140063
Runtime: mins: 1.0905075958008335
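The scores above pair error metrics (MSE, RMSE, MAPE) with a directional accuracy: how often the predicted day-over-day move matches the actual one. Both can be sketched with numpy; the arrays below are hypothetical stand-ins for the notebook's predictions:

```python
import numpy as np

def error_metrics(y_true, y_pred):
    """Return MSE, RMSE and MAPE (in percent) for two aligned series."""
    err = y_true - y_pred
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs(err / y_true)) * 100
    return mse, rmse, mape

def directional_accuracy(y_true, y_pred):
    """Share of steps where predicted and actual moves have the same sign."""
    same_sign = np.sign(np.diff(y_true)) == np.sign(np.diff(y_pred))
    return same_sign.mean() * 100

# Hypothetical close prices and model predictions, for illustration only.
close = np.array([100.0, 101.0, 103.0, 102.0, 104.0])
pred  = np.array([100.5, 101.5, 102.0, 102.5, 103.5])

mse, rmse, mape = error_metrics(close, pred)
acc = directional_accuracy(close, pred)
```

A directional accuracy near 50%, as several MAs show above, is roughly what a coin flip would achieve, so the error metrics carry most of the comparison.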

Architecture Used

In [ ]:
from google.colab import files
import cv2
uploaded = files.upload()
Saving Experiment8.png to Experiment8 (2).png
In [ ]:
import matplotlib.pyplot as plt  # needed for the plotting calls below
imgfile = 'Experiment8.png'
img = cv2.imread(imgfile)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR; convert for matplotlib
plt.figure(figsize=(20,10))
plt.axis("off")
plt.title('LSTM Architecture '+imgfile, fontsize=18)
plt.imshow(img)
Out[ ]:
<matplotlib.image.AxesImage at 0x7f4c3a22c210>

Model Plots

In [169]:
import json  # this cell runs before the later cell that imports json
with open('simulation8_data.json') as json_file:
    simulation8 = json.load(json_file)
fileimg = 'Experiment8'
In [170]:
for SIM in simulation8.keys():
  plot_train(simulation8, SIM)
  plot_test(simulation8, SIM)
----- Train RMSE for SMA ----- 21.459837351832487
----- Train_MSE_LSTM for SMA ----- 460.5246191671048
----- Train MAE LSTM for SMA ----- 21.456445802556406
----- Test RMSE for SMA----- 4.8560859767665985
----- Test_MSE_LSTM for SMA----- 23.581571013749205
----- Test_MAE_LSTM for SMA----- 3.8329222497365705
----- Train RMSE for EMA ----- 22.299931979787875
----- Train_MSE_LSTM for EMA ----- 497.2869663031659
----- Train MAE LSTM for EMA ----- 22.271449528118172
----- Test RMSE for EMA----- 6.027529975080776
----- Test_MSE_LSTM for EMA----- 36.33111760049725
----- Test_MAE_LSTM for EMA----- 4.717669821139163
----- Train RMSE for WMA ----- 25.36567259630859
----- Train_MSE_LSTM for WMA ----- 643.4173462631205
----- Train MAE LSTM for WMA ----- 25.36438252666209
----- Test RMSE for WMA----- 6.927452055768859
----- Test_MSE_LSTM for WMA----- 47.98959198497618
----- Test_MAE_LSTM for WMA----- 5.543762493289533
----- Train RMSE for DEMA ----- 28.627513883688447
----- Train_MSE_LSTM for DEMA ----- 819.5345511607749
----- Train MAE LSTM for DEMA ----- 28.627047009987407
----- Test RMSE for DEMA----- 12.129925818673026
----- Test_MSE_LSTM for DEMA----- 147.1351003665105
----- Test_MAE_LSTM for DEMA----- 10.86165140779962
----- Train RMSE for KAMA ----- 19.726968740783253
----- Train_MSE_LSTM for KAMA ----- 389.15329569983953
----- Train MAE LSTM for KAMA ----- 19.714890199132483
----- Test RMSE for KAMA----- 4.8139783804134835
----- Test_MSE_LSTM for KAMA----- 23.174387847088422
----- Test_MAE_LSTM for KAMA----- 3.8527324736855544
----- Train RMSE for MIDPOINT ----- 16.271251141384827
----- Train_MSE_LSTM for MIDPOINT ----- 264.75361370601706
----- Train MAE LSTM for MIDPOINT ----- 16.194465255973363
----- Test RMSE for MIDPOINT----- 4.323561810917424
----- Test_MSE_LSTM for MIDPOINT----- 18.693186732823552
----- Test_MAE_LSTM for MIDPOINT----- 3.393872875098524
----- Train RMSE for T3 ----- 19.716088793140845
----- Train_MSE_LSTM for T3 ----- 388.72415729901405
----- Train MAE LSTM for T3 ----- 19.675862314677474
----- Test RMSE for T3----- 7.026803551744203
----- Test_MSE_LSTM for T3----- 49.37596815480495
----- Test_MAE_LSTM for T3----- 5.656408399322556
----- Train RMSE for TEMA ----- 18.552481299317044
----- Train_MSE_LSTM for TEMA ----- 344.19456236150864
----- Train MAE LSTM for TEMA ----- 18.50773582364073
----- Test RMSE for TEMA----- 4.540808274653241
----- Test_MSE_LSTM for TEMA----- 20.618939787159338
----- Test_MAE_LSTM for TEMA----- 3.9138992194140063
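Each simulation*_data.json maps a moving-average name to nested results, and later cells read simulation[ma]['final']['mse'/'rmse'/'mae']. Collecting those entries into one ranked comparison can be sketched with the standard library alone; the payload below is a hypothetical stand-in mirroring that layout:

```python
import json

# Hypothetical payload with the assumed simulation*_data.json structure.
raw = json.dumps({
    "SMA":  {"final": {"mse": 23.58, "rmse": 4.856, "mae": 3.833}},
    "KAMA": {"final": {"mse": 23.17, "rmse": 4.814, "mae": 3.853}},
})
simulation = json.loads(raw)

# One row per MA, sorted by test RMSE so the best-performing MA comes first.
rows = sorted(
    ((ma, m["final"]["rmse"], m["final"]["mse"], m["final"]["mae"])
     for ma, m in simulation.items()),
    key=lambda r: r[1],
)
for ma, rmse, mse, mae in rows:
    print(f"{ma:<9} RMSE={rmse:<7} MSE={mse:<7} MAE={mae}")
```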

List of RMSE, MSE & MAE scores for Test data

In [171]:
import json

# Load the result files for all eight experiments.
simulations_by_n = {}
for n in range(1, 9):
    with open(f'simulation{n}_data.json') as json_file:
        simulations_by_n[n] = json.load(json_file)

(simulation1, simulation2, simulation3, simulation4,
 simulation5, simulation6, simulation7, simulation8) = (
    simulations_by_n[n] for n in range(1, 9))
In [172]:
simulations = [simulation1,simulation2,simulation3,simulation4,simulation5,simulation6,simulation7,simulation8]
for i,simulation in enumerate(simulations):
  for ma in simulation.keys():
    # Look up each moving average's own final test scores.
    print('Experiment ',i+1,' for MA :',ma,'the MSE  is: ',simulation[ma]['final']['mse'])
    print('Experiment ',i+1,' for MA :',ma,'the RMSE is: ',simulation[ma]['final']['rmse'])
    print('Experiment ',i+1,' for MA :',ma,'the MAE is: ',simulation[ma]['final']['mae'])
All eight moving averages printed identical metrics within each experiment (one set per experiment shown):

Experiment 1:  MSE 33.21332596254529   RMSE 5.763100377621866   MAE 4.8410208542529105
Experiment 2:  MSE 52.954444549979364  RMSE 7.2769804555172035  MAE 6.363917319886005
Experiment 3:  MSE 61.72382816732521   RMSE 7.856451372427962   MAE 7.166001671992897
Experiment 4:  MSE 22.406031022375306  RMSE 4.733500926626645   MAE 4.170481392757424
Experiment 5:  MSE 54.19993303236338   RMSE 7.362060379565179   MAE 6.351731050288764
Experiment 6:  MSE 60.21739561493253   RMSE 7.75998683084788    MAE 6.775089256594788
Experiment 7:  MSE 63.76180736926059   RMSE 7.98509908324628    MAE 7.313429173529297
Experiment 8:  MSE 20.618939787159338  RMSE 4.540808274653241   MAE 3.9138992194140063
In [173]:
text = 'Stock with Baseline '
simulations = [simulation1,simulation2,simulation3,simulation4,simulation5,simulation6,simulation7,simulation8]
for i,simulation in enumerate(simulations):
  # Print each metric as its own block so scores are easy to compare across MAs.
  for ma in simulation.keys():
    print(text+'Experiment ',i+1,' for MA :',ma,'the RMSE is: ',simulation[ma]['final']['rmse'])
  for ma in simulation.keys():
    print(text+'Experiment ',i+1,' for MA :',ma,'the MSE  is: ',simulation[ma]['final']['mse'])
  for ma in simulation.keys():
    print(text+'Experiment ',i+1,' for MA :',ma,'the MAE is: ',simulation[ma]['final']['mae'])
Stock with Baseline Experiment  1  for MA : SMA the RMSE is:  6.068911917484476
Stock with Baseline Experiment  1  for MA : EMA the RMSE is:  8.465370483543822
Stock with Baseline Experiment  1  for MA : WMA the RMSE is:  7.359939097954609
Stock with Baseline Experiment  1  for MA : DEMA the RMSE is:  11.047191433294731
Stock with Baseline Experiment  1  for MA : KAMA the RMSE is:  6.759014190816157
Stock with Baseline Experiment  1  for MA : MIDPOINT the RMSE is:  6.965410649911217
Stock with Baseline Experiment  1  for MA : T3 the RMSE is:  14.019217214863545
Stock with Baseline Experiment  1  for MA : TEMA the RMSE is:  5.763100377621866
Stock with Baseline Experiment  1  for MA : SMA the MSE  is:  36.8316918621851
Stock with Baseline Experiment  1  for MA : EMA the MSE  is:  71.66249742365495
Stock with Baseline Experiment  1  for MA : WMA the MSE  is:  54.1687035256009
Stock with Baseline Experiment  1  for MA : DEMA the MSE  is:  122.04043856386052
Stock with Baseline Experiment  1  for MA : KAMA the MSE  is:  45.6842728316542
Stock with Baseline Experiment  1  for MA : MIDPOINT the MSE  is:  48.516945521896595
Stock with Baseline Experiment  1  for MA : T3 the MSE  is:  196.53845131752638
Stock with Baseline Experiment  1  for MA : TEMA the MSE  is:  33.21332596254529
Stock with Baseline Experiment  1  for MA : SMA the MAE is:  5.0566072358251795
Stock with Baseline Experiment  1  for MA : EMA the MAE is:  6.945165499596413
Stock with Baseline Experiment  1  for MA : WMA the MAE is:  6.023882427496638
Stock with Baseline Experiment  1  for MA : DEMA the MAE is:  9.094033094452044
Stock with Baseline Experiment  1  for MA : KAMA the MAE is:  5.403969846039035
Stock with Baseline Experiment  1  for MA : MIDPOINT the MAE is:  5.700028346794376
Stock with Baseline Experiment  1  for MA : T3 the MAE is:  11.19411588232732
Stock with Baseline Experiment  1  for MA : TEMA the MAE is:  4.8410208542529105
Stock with Baseline Experiment  2  for MA : SMA the RMSE is:  7.330062003435954
Stock with Baseline Experiment  2  for MA : EMA the RMSE is:  7.4710816577634125
Stock with Baseline Experiment  2  for MA : WMA the RMSE is:  7.692120162847206
Stock with Baseline Experiment  2  for MA : DEMA the RMSE is:  11.031000399787564
Stock with Baseline Experiment  2  for MA : KAMA the RMSE is:  7.669321017430123
Stock with Baseline Experiment  2  for MA : MIDPOINT the RMSE is:  8.109611894467204
Stock with Baseline Experiment  2  for MA : T3 the RMSE is:  10.048465542477471
Stock with Baseline Experiment  2  for MA : TEMA the RMSE is:  7.2769804555172035
Stock with Baseline Experiment  2  for MA : SMA the MSE  is:  53.7298089742155
Stock with Baseline Experiment  2  for MA : EMA the MSE  is:  55.817061136968896
Stock with Baseline Experiment  2  for MA : WMA the MSE  is:  59.16871259968053
Stock with Baseline Experiment  2  for MA : DEMA the MSE  is:  121.68296982011337
Stock with Baseline Experiment  2  for MA : KAMA the MSE  is:  58.818484868395416
Stock with Baseline Experiment  2  for MA : MIDPOINT the MSE  is:  65.76580507888394
Stock with Baseline Experiment  2  for MA : T3 the MSE  is:  100.97165975835705
Stock with Baseline Experiment  2  for MA : TEMA the MSE  is:  52.954444549979364
Stock with Baseline Experiment  2  for MA : SMA the MAE is:  5.9334980672321835
Stock with Baseline Experiment  2  for MA : EMA the MAE is:  6.256840285842
Stock with Baseline Experiment  2  for MA : WMA the MAE is:  6.209776446209727
Stock with Baseline Experiment  2  for MA : DEMA the MAE is:  9.966678894256134
Stock with Baseline Experiment  2  for MA : KAMA the MAE is:  6.375419548070835
Stock with Baseline Experiment  2  for MA : MIDPOINT the MAE is:  6.6791438644890375
Stock with Baseline Experiment  2  for MA : T3 the MAE is:  8.008063201476219
Stock with Baseline Experiment  2  for MA : TEMA the MAE is:  6.363917319886005
Stock with Baseline Experiment  3  for MA : SMA the RMSE is:  4.321360184570081
Stock with Baseline Experiment  3  for MA : EMA the RMSE is:  7.437993019736462
Stock with Baseline Experiment  3  for MA : WMA the RMSE is:  7.1309103580307545
Stock with Baseline Experiment  3  for MA : DEMA the RMSE is:  6.144216052305218
Stock with Baseline Experiment  3  for MA : KAMA the RMSE is:  6.034609740001348
Stock with Baseline Experiment  3  for MA : MIDPOINT the RMSE is:  11.927844685264063
Stock with Baseline Experiment  3  for MA : T3 the RMSE is:  6.135206291200104
Stock with Baseline Experiment  3  for MA : TEMA the RMSE is:  7.856451372427962
Stock with Baseline Experiment  3  for MA : SMA the MSE  is:  18.67415384478757
Stock with Baseline Experiment  3  for MA : EMA the MSE  is:  55.32374016164833
Stock with Baseline Experiment  3  for MA : WMA the MSE  is:  50.84988253427031
Stock with Baseline Experiment  3  for MA : DEMA the MSE  is:  37.751390897405116
Stock with Baseline Experiment  3  for MA : KAMA the MSE  is:  36.41651471411913
Stock with Baseline Experiment  3  for MA : MIDPOINT the MSE  is:  142.27347883578213
Stock with Baseline Experiment  3  for MA : T3 the MSE  is:  37.64075623558134
Stock with Baseline Experiment  3  for MA : TEMA the MSE  is:  61.72382816732521
Stock with Baseline Experiment  3  for MA : SMA the MAE is:  3.534296685764838
Stock with Baseline Experiment  3  for MA : EMA the MAE is:  6.054411328729787
Stock with Baseline Experiment  3  for MA : WMA the MAE is:  5.537694007766219
Stock with Baseline Experiment  3  for MA : DEMA the MAE is:  4.610910381239713
Stock with Baseline Experiment  3  for MA : KAMA the MAE is:  4.797119170641582
Stock with Baseline Experiment  3  for MA : MIDPOINT the MAE is:  10.30348805298139
Stock with Baseline Experiment  3  for MA : T3 the MAE is:  5.0145827751195515
Stock with Baseline Experiment  3  for MA : TEMA the MAE is:  7.166001671992897
Stock with Baseline Experiment  4  for MA : SMA the RMSE is:  5.053532448778641
Stock with Baseline Experiment  4  for MA : EMA the RMSE is:  6.105176365725656
Stock with Baseline Experiment  4  for MA : WMA the RMSE is:  7.454787287347779
Stock with Baseline Experiment  4  for MA : DEMA the RMSE is:  11.910511331344482
Stock with Baseline Experiment  4  for MA : KAMA the RMSE is:  4.795504786575025
Stock with Baseline Experiment  4  for MA : MIDPOINT the RMSE is:  4.091793872710265
Stock with Baseline Experiment  4  for MA : T3 the RMSE is:  7.788120756826606
Stock with Baseline Experiment  4  for MA : TEMA the RMSE is:  4.733500926626645
Stock with Baseline Experiment  4  for MA : SMA the MSE  is:  25.538190210858644
Stock with Baseline Experiment  4  for MA : EMA the MSE  is:  37.273178456615135
Stock with Baseline Experiment  4  for MA : WMA the MSE  is:  55.57385349960206
Stock with Baseline Experiment  4  for MA : DEMA the MSE  is:  141.86028017408532
Stock with Baseline Experiment  4  for MA : KAMA the MSE  is:  22.996866158063977
Stock with Baseline Experiment  4  for MA : MIDPOINT the MSE  is:  16.742777096749265
Stock with Baseline Experiment  4  for MA : T3 the MSE  is:  60.65482492291343
Stock with Baseline Experiment  4  for MA : TEMA the MSE  is:  22.406031022375306
Stock with Baseline Experiment  4  for MA : SMA the MAE is:  3.962093187177072
Stock with Baseline Experiment  4  for MA : EMA the MAE is:  4.793024214680565
Stock with Baseline Experiment  4  for MA : WMA the MAE is:  6.073522122630416
Stock with Baseline Experiment  4  for MA : DEMA the MAE is:  10.597824101027992
Stock with Baseline Experiment  4  for MA : KAMA the MAE is:  3.79818199604115
Stock with Baseline Experiment  4  for MA : MIDPOINT the MAE is:  3.2612983803063513
Stock with Baseline Experiment  4  for MA : T3 the MAE is:  6.228183408991355
Stock with Baseline Experiment  4  for MA : TEMA the MAE is:  4.170481392757424
Stock with Baseline Experiment  5  for MA : SMA the RMSE is:  14.79349257884314
Stock with Baseline Experiment  5  for MA : EMA the RMSE is:  7.470481673221836
Stock with Baseline Experiment  5  for MA : WMA the RMSE is:  5.452768085391956
Stock with Baseline Experiment  5  for MA : DEMA the RMSE is:  14.757131671057321
Stock with Baseline Experiment  5  for MA : KAMA the RMSE is:  6.474984495388551
Stock with Baseline Experiment  5  for MA : MIDPOINT the RMSE is:  7.012607044664768
Stock with Baseline Experiment  5  for MA : T3 the RMSE is:  16.359580983596892
Stock with Baseline Experiment  5  for MA : TEMA the RMSE is:  7.362060379565179
Stock with Baseline Experiment  5  for MA : SMA the MSE  is:  218.84742268028705
Stock with Baseline Experiment  5  for MA : EMA the MSE  is:  55.80809642994332
Stock with Baseline Experiment  5  for MA : WMA the MSE  is:  29.732679793069057
Stock with Baseline Experiment  5  for MA : DEMA the MSE  is:  217.77293515692304
Stock with Baseline Experiment  5  for MA : KAMA the MSE  is:  41.92542421552212
Stock with Baseline Experiment  5  for MA : MIDPOINT the MSE  is:  49.176657562881935
Stock with Baseline Experiment  5  for MA : T3 the MSE  is:  267.63588995886505
Stock with Baseline Experiment  5  for MA : TEMA the MSE  is:  54.19993303236338
Stock with Baseline Experiment  5  for MA : SMA the MAE is:  12.049823582857737
Stock with Baseline Experiment  5  for MA : EMA the MAE is:  6.155377787606487
Stock with Baseline Experiment  5  for MA : WMA the MAE is:  4.481765047502752
Stock with Baseline Experiment  5  for MA : DEMA the MAE is:  12.98122268289883
Stock with Baseline Experiment  5  for MA : KAMA the MAE is:  5.290774580380154
Stock with Baseline Experiment  5  for MA : MIDPOINT the MAE is:  5.71406626958764
Stock with Baseline Experiment  5  for MA : T3 the MAE is:  13.903459389418241
Stock with Baseline Experiment  5  for MA : TEMA the MAE is:  6.351731050288764
Stock with Baseline Experiment  6  for MA : SMA the RMSE is:  7.990877919666713
Stock with Baseline Experiment  6  for MA : EMA the RMSE is:  7.7463707319588195
Stock with Baseline Experiment  6  for MA : WMA the RMSE is:  7.529732786578689
Stock with Baseline Experiment  6  for MA : DEMA the RMSE is:  10.752779937032262
Stock with Baseline Experiment  6  for MA : KAMA the RMSE is:  6.141462247970415
Stock with Baseline Experiment  6  for MA : MIDPOINT the RMSE is:  7.680402859076942
Stock with Baseline Experiment  6  for MA : T3 the RMSE is:  12.02585308012629
Stock with Baseline Experiment  6  for MA : TEMA the RMSE is:  7.75998683084788
Stock with Baseline Experiment  6  for MA : SMA the MSE  is:  63.854129927017006
Stock with Baseline Experiment  6  for MA : EMA the MSE  is:  60.00625951694821
Stock with Baseline Experiment  6  for MA : WMA the MSE  is:  56.69687583727807
Stock with Baseline Experiment  6  for MA : DEMA the MSE  is:  115.62227637424353
Stock with Baseline Experiment  6  for MA : KAMA the MSE  is:  37.71755854324582
Stock with Baseline Experiment  6  for MA : MIDPOINT the MSE  is:  58.98858807771727
Stock with Baseline Experiment  6  for MA : T3 the MSE  is:  144.62114230478295
Stock with Baseline Experiment  6  for MA : TEMA the MSE  is:  60.21739561493253
Stock with Baseline Experiment  6  for MA : SMA the MAE is:  6.455960052106778
Stock with Baseline Experiment  6  for MA : EMA the MAE is:  6.477662803945572
Stock with Baseline Experiment  6  for MA : WMA the MAE is:  6.079114892920341
Stock with Baseline Experiment  6  for MA : DEMA the MAE is:  9.572797111202712
Stock with Baseline Experiment  6  for MA : KAMA the MAE is:  4.988005540782208
Stock with Baseline Experiment  6  for MA : MIDPOINT the MAE is:  6.28601448834674
Stock with Baseline Experiment  6  for MA : T3 the MAE is:  9.865104713742154
Stock with Baseline Experiment  6  for MA : TEMA the MAE is:  6.775089256594788
Stock with Baseline Experiment  7  for MA : SMA the RMSE is:  7.225014659169763
Stock with Baseline Experiment  7  for MA : EMA the RMSE is:  7.960508327463522
Stock with Baseline Experiment  7  for MA : WMA the RMSE is:  8.462665633087392
Stock with Baseline Experiment  7  for MA : DEMA the RMSE is:  12.99330390281423
Stock with Baseline Experiment  7  for MA : KAMA the RMSE is:  8.324390699154408
Stock with Baseline Experiment  7  for MA : MIDPOINT the RMSE is:  4.504130817051618
Stock with Baseline Experiment  7  for MA : T3 the RMSE is:  7.1055894968451945
Stock with Baseline Experiment  7  for MA : TEMA the RMSE is:  7.98509908324628
Stock with Baseline Experiment  7  for MA : SMA the MSE  is:  52.20083682521797
Stock with Baseline Experiment  7  for MA : EMA the MSE  is:  63.36969283161608
Stock with Baseline Experiment  7  for MA : WMA the MSE  is:  71.61670961743842
Stock with Baseline Experiment  7  for MA : DEMA the MSE  is:  168.8259463108875
Stock with Baseline Experiment  7  for MA : KAMA the MSE  is:  69.29548051216841
Stock with Baseline Experiment  7  for MA : MIDPOINT the MSE  is:  20.287194417114076
Stock with Baseline Experiment  7  for MA : T3 the MSE  is:  50.48940209767674
Stock with Baseline Experiment  7  for MA : TEMA the MSE  is:  63.76180736926059
Stock with Baseline Experiment  7  for MA : SMA the MAE is:  5.885308101636885
Stock with Baseline Experiment  7  for MA : EMA the MAE is:  6.712143148682283
Stock with Baseline Experiment  7  for MA : WMA the MAE is:  6.541482431642796
Stock with Baseline Experiment  7  for MA : DEMA the MAE is:  11.83342735128442
Stock with Baseline Experiment  7  for MA : KAMA the MAE is:  6.942234066695414
Stock with Baseline Experiment  7  for MA : MIDPOINT the MAE is:  3.6982533278355527
Stock with Baseline Experiment  7  for MA : T3 the MAE is:  5.703419596047134
Stock with Baseline Experiment  7  for MA : TEMA the MAE is:  7.313429173529297
Stock with Baseline Experiment  8  for MA : SMA the RMSE is:  4.8560859767665985
Stock with Baseline Experiment  8  for MA : EMA the RMSE is:  6.027529975080776
Stock with Baseline Experiment  8  for MA : WMA the RMSE is:  6.927452055768859
Stock with Baseline Experiment  8  for MA : DEMA the RMSE is:  12.129925818673026
Stock with Baseline Experiment  8  for MA : KAMA the RMSE is:  4.8139783804134835
Stock with Baseline Experiment  8  for MA : MIDPOINT the RMSE is:  4.323561810917424
Stock with Baseline Experiment  8  for MA : T3 the RMSE is:  7.026803551744203
Stock with Baseline Experiment  8  for MA : TEMA the RMSE is:  4.540808274653241
Stock with Baseline Experiment  8  for MA : SMA the MSE  is:  23.581571013749205
Stock with Baseline Experiment  8  for MA : EMA the MSE  is:  36.33111760049725
Stock with Baseline Experiment  8  for MA : WMA the MSE  is:  47.98959198497618
Stock with Baseline Experiment  8  for MA : DEMA the MSE  is:  147.1351003665105
Stock with Baseline Experiment  8  for MA : KAMA the MSE  is:  23.174387847088422
Stock with Baseline Experiment  8  for MA : MIDPOINT the MSE  is:  18.693186732823552
Stock with Baseline Experiment  8  for MA : T3 the MSE  is:  49.37596815480495
Stock with Baseline Experiment  8  for MA : TEMA the MSE  is:  20.618939787159338
Stock with Baseline Experiment  8  for MA : SMA the MAE is:  3.8329222497365705
Stock with Baseline Experiment  8  for MA : EMA the MAE is:  4.717669821139163
Stock with Baseline Experiment  8  for MA : WMA the MAE is:  5.543762493289533
Stock with Baseline Experiment  8  for MA : DEMA the MAE is:  10.86165140779962
Stock with Baseline Experiment  8  for MA : KAMA the MAE is:  3.8527324736855544
Stock with Baseline Experiment  8  for MA : MIDPOINT the MAE is:  3.393872875098524
Stock with Baseline Experiment  8  for MA : T3 the MAE is:  5.656408399322556
Stock with Baseline Experiment  8  for MA : TEMA the MAE is:  3.9138992194140063
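The per-experiment logs above are hard to compare by eye. As a minimal sketch (not part of the original notebook), the printed lines could be parsed and aggregated with pandas to rank the moving averages by mean error; the `log_lines` list below is a hypothetical sample of the output format, with values copied from the logs above:

```python
import re
import pandas as pd

# Hypothetical sample of the printed log lines (copied from the output above).
log_lines = [
    "Stock with Baseline Experiment  4  for MA : SMA the MAE is:  3.962093187177072",
    "Stock with Baseline Experiment  4  for MA : MIDPOINT the MAE is:  3.2612983803063513",
    "Stock with Baseline Experiment  8  for MA : SMA the MAE is:  3.8329222497365705",
    "Stock with Baseline Experiment  8  for MA : MIDPOINT the MAE is:  3.393872875098524",
]

# Extract experiment number, MA type, metric name, and value from each line.
pattern = re.compile(r"Experiment\s+(\d+)\s+for MA : (\w+) the (\w+)\s+is:\s+([\d.]+)")

records = []
for line in log_lines:
    m = pattern.search(line)
    if m:
        exp, ma, metric, value = m.groups()
        records.append(
            {"experiment": int(exp), "ma": ma, "metric": metric, "value": float(value)}
        )

df = pd.DataFrame(records)

# Mean MAE per moving average across experiments, lowest (best) first.
summary = df[df["metric"] == "MAE"].groupby("ma")["value"].mean().sort_values()
print(summary)
```

On the full logs this would give a per-MA ranking for each metric (RMSE, MSE, MAE) across all experiments, making it easier to judge which moving-average preprocessing best balances volatility between the ARIMA and LSTM components.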

Create HTML

In [176]:
cd ..
/content/drive/.shortcut-targets-by-id/1IaGjVBlTspxI2CHSrxfYnaiYvsaG0pHs/Stock price prediction/Archana - LSTM Hybrid/Outputs
In [ ]:
%%shell
jupyter nbconvert --to html LSTM_Hybrid_using_TA_LIB_Baseline.ipynb